Machine Unlearning for Governance of Foundation Models
Springer (publisher)
9783032172815 (ISBN)
- Not yet published (expected March 2026)
This book provides a systematic and in-depth introduction to machine unlearning (MU) for foundation models, framed through an optimization-model-data tri-design perspective and complemented by assessments and applications. As foundation models are continuously adapted and reused, the ability to selectively remove unwanted data, knowledge, or model behavior without full retraining poses new theoretical and practical challenges. Thus, MU has become a critical capability for trustworthy, deployable, and regulation-ready artificial intelligence. From the optimization viewpoint, this book treats unlearning as a multi-objective and often adversarial problem that must simultaneously enforce targeted forgetting, preserve model utility, resist recovery attacks, and remain computationally efficient. From the model perspective, the book examines how knowledge is distributed across layers and latent subspaces, motivating modular and localized unlearning. From the data perspective, the book explores forget-set construction, data attribution, corruption, and coresets as key drivers of reliable forgetting.
Bridging theory and practice, the book also provides a comprehensive review of benchmark datasets and evaluation metrics for machine unlearning, critically examining their strengths and limitations. The authors further survey a wide range of applications in computer vision and large language models, including AI safety, privacy, fairness, and industrial deployment, highlighting why post-training model modification is often preferred over repeated retraining in real-world systems. By unifying optimization, model, data, evaluation, and application perspectives, this book offers both a foundational framework and a practical toolkit for designing machine unlearning methods that are effective, robust, and ready for large-scale, regulated deployment.
Sijia Liu, Ph.D., is a Red Cedar Distinguished Associate Professor in the Department of Computer Science and Engineering at Michigan State University (MSU), Principal Investigator of the OPTML Lab, and an Affiliated Professor at the MIT-IBM Watson AI Lab, IBM Research. His research focuses on scalable and trustworthy AI, spanning both foundational and use-inspired aspects. Examples include machine unlearning for vision and language models, scalable optimization for deep models, adversarial robustness, and data-model efficiency. He is a co-author of the textbook Introduction to Foundation Models (Springer, 2024). His honors include the NSF CAREER Award, the INNS Aharon Katzir Young Investigator Award, MSU's Withrow Rising Scholar Award, Best Paper Runner-Up at UAI (2022), and Best Student Paper Award at ICASSP (2017). He co-founded the New Frontiers in Adversarial Machine Learning Workshop series (ICML/NeurIPS 2021-2024) and has delivered tutorials on trustworthy and scalable ML and their applications at major AI/ML/CV conferences.
Yang Liu, Ph.D., is an Associate Professor of Computer Science and Engineering at UC Santa Cruz. His research focuses on developing fair and robust machine learning algorithms to tackle the challenges of biased and shifting data. He is a recipient of the NSF CAREER Award. He has been selected to participate in several high-profile projects, including NSF-Amazon Fairness in AI, DARPA SCORE, and IARPA HFC. His recent work on trustworthy ML has been recognized with four best paper awards from workshops co-located with ICML/ICLR/IJCAI.
Nathalie Baracaldo is a Senior Research Scientist and Master Inventor at IBM Research in San Jose, California. Her research focuses on safeguarding generative AI models through a variety of techniques, including unlearning. She has extensive experience delivering impactful machine learning solutions that are highly accurate, withstand adversarial attacks, and protect data privacy. She served as the principal investigator for the DARPA GARD program, where her focus was to ensure her team extended and maintained the Adversarial Robustness Toolbox (ART) to support red-teaming evaluations. She also led the IBM federated learning effort and co-edited the book Federated Learning: A Comprehensive Overview of Methods and Applications (Springer, 2022). In 2020 and 2021, she received the IBM Master Inventor distinction and the Corporate Technical Recognition, respectively. Her research has been published in top conferences in the fields of AI and security and has received multiple best paper awards and numerous citations. She received her doctoral degree from the University of Pittsburgh.
Introduction.- Concept Dissection of MU.- Algorithmic Foundations of MU.- Evaluation Metrics and Methods of MU.- Applications.- Conclusion and Prospects.
| Publication date (per publisher) | March 31, 2026 |
|---|---|
| Series | Synthesis Lectures on Computer Vision |
| Additional info | X, 140 p. |
| Place of publication | Cham |
| Language | English |
| Dimensions | 168 x 240 mm |
| Subject area | Computer Science ► Networks ► Security / Firewall |
| | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| Keywords | Data-model Interactions • Foundation model • Machine Unlearning • model governance • Privacy and Security • trustworthy AI • Vision and Language Domains |
| ISBN-13 | 9783032172815 |
| Condition | New |