AI-Native LLM Security
Vaibhav Malik, Ken Huang, Ads Dawson

Threats, defenses, and best practices for building safe and trustworthy AI
Book | Softcover
416 pages
2025
Packt Publishing Limited (publisher)
978-1-83620-375-9 (ISBN)
CHF 66.30 incl. VAT
Unlock the secrets to safeguarding AI by exploring the top risks, essential frameworks, and cutting-edge strategies—featuring the OWASP Top 10 for LLM Applications and Generative AI

DRM-free PDF version + access to Packt's next-gen Reader*

Key Features

Understand adversarial AI attacks to strengthen your AI security posture effectively
Leverage insights from LLM security experts to navigate emerging threats and challenges
Implement secure-by-design strategies and MLSecOps practices for robust AI system protection
Purchase of the print or Kindle book includes a free PDF eBook

Book Description

Adversarial AI attacks present a unique set of security challenges, exploiting the very foundation of how AI learns. This book explores these threats in depth, equipping cybersecurity professionals with the tools needed to secure generative AI and LLM applications. Rather than skimming the surface of emerging risks, it focuses on practical strategies, industry standards, and recent research to build a robust defense framework.
Structured around actionable insights, the chapters introduce a secure-by-design methodology, integrating threat modeling and MLSecOps practices to fortify AI systems. You’ll discover how to leverage established taxonomies from OWASP, NIST, and MITRE to identify and mitigate vulnerabilities. Through real-world examples, the book highlights best practices for incorporating security controls into AI development life cycles, covering key areas such as CI/CD, MLOps, and open-access LLMs.
Built on the expertise of its co-authors—pioneers in the OWASP Top 10 for LLM applications—this guide also addresses the ethical implications of AI security, contributing to the broader conversation on trustworthy AI. By the end of this book, you’ll be able to develop, deploy, and secure AI technologies with confidence and clarity.
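The description above leans on the OWASP Top 10 for LLM Applications as a shared taxonomy for identifying and communicating vulnerabilities. As a rough illustration of how that taxonomy is used in practice (this sketch is not taken from the book; the finding text and the `tag_finding` helper are invented here), a threat-modeling workflow might label each finding with its OWASP category:

```python
# Category IDs and names below follow the public OWASP Top 10 for LLM
# Applications 2025 list; the mapping helper and example finding are
# hypothetical, for illustration only.
OWASP_LLM_TOP10 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Sensitive Information Disclosure",
    "LLM03": "Supply Chain",
    "LLM04": "Data and Model Poisoning",
    "LLM05": "Improper Output Handling",
    "LLM06": "Excessive Agency",
    "LLM07": "System Prompt Leakage",
    "LLM08": "Vector and Embedding Weaknesses",
    "LLM09": "Misinformation",
    "LLM10": "Unbounded Consumption",
}

def tag_finding(category_id: str, description: str) -> str:
    """Prefix a security finding with its OWASP LLM category name."""
    name = OWASP_LLM_TOP10.get(category_id, "Unknown category")
    return f"[{category_id}: {name}] {description}"

print(tag_finding("LLM01", "User input concatenated into system prompt"))
```

Keeping findings keyed to a stable taxonomy like this is what lets teams compare risk across projects and map mitigations back to a known category.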

*Email sign-up and proof of purchase required

What you will learn

Understand unique security risks posed by LLMs
Identify vulnerabilities and attack vectors using threat modeling
Detect and respond to security incidents in operational LLM deployments
Navigate the complex legal and ethical landscape of LLM security
Develop strategies for ongoing governance and continuous improvement
Mitigate risks across the LLM life cycle, from data curation to operations
Design secure LLM architectures with isolation and access controls

Who this book is for

This book is essential for cybersecurity professionals, AI practitioners, and leaders responsible for developing and securing AI systems powered by large language models. Ideal for CISOs, security architects, ML engineers, data scientists, and DevOps professionals, it provides insights into securing AI applications. Managers and executives overseeing AI initiatives will also benefit from understanding the risks and best practices outlined in this guide to ensure the integrity of their AI projects. A basic understanding of security concepts and AI fundamentals is assumed.

Vaibhav Malik is a security leader with over 14 years of industry experience. He partners with global technology leaders to architect and deploy comprehensive security solutions for enterprise clients worldwide. A recognized thought leader in Zero Trust security architecture, Vaibhav brings deep expertise from previous roles at leading service providers and security companies, where he guided Fortune 500 organizations through complex network, security, and cloud transformation initiatives. He champions an identity- and data-centric approach to cybersecurity and is a frequent speaker at industry conferences. He holds a Master's degree in Networking from the University of Colorado Boulder and an MBA from the University of Illinois Urbana-Champaign, and maintains his CISSP certification. His extensive hands-on experience and strategic vision make him a trusted advisor for organizations navigating today's evolving threat landscape and implementing modern security architectures.

Ken Huang is a prolific author and renowned expert in AI and Web3, with numerous published books spanning business and technical guides as well as cutting-edge research. He is a Research Fellow and Co-Chair of the AI Safety Working Groups at the Cloud Security Alliance, Co-Chair of the OWASP AIVSS project, and Co-Chair of the AI STR Working Group at the World Digital Technology Academy. He is also an Adjunct Professor at the University of San Francisco, where he teaches a graduate course on generative AI for data security. Huang serves as CEO and Chief AI Officer (CAIO) of DistributedApps.ai, a firm specializing in generative AI-related training and consulting. His technical leadership is further reflected in his role as a core contributor to OWASP's Top 10 Risks for LLM Applications and his participation in the NIST Generative AI Public Working Group. A globally sought-after speaker, Ken has presented at events hosted by RSA, OWASP, ISC2, the Davos WEF, ACM, IEEE, Consensus, the CSA AI Summit, the Depository Trust & Clearing Corporation, and the World Bank. He is also a member of the OpenAI Forum, contributing to the global dialogue on secure and responsible AI development.

Ads Dawson is a self-described "meticulous dude" who lives by the philosophy "Harness code to conjure creative chaos: think evil; do good." He is a recognized expert in offensive AI security, specializing in adversarial machine learning exploitation and autonomous red teaming, with a talent for demonstrating offensive security capabilities using agents. As Staff AI Security Researcher at Dreadnode and founding Technical Lead for the OWASP LLM Applications Project, he architects next-generation evaluation harnesses for cyber operations and AI red teaming. Based in Toronto, Canada, and an avid bug bounty hunter, he bridges traditional AppSec with cutting-edge AI vulnerability research, positioning him among the few experts capable of conducting full-spectrum adversarial assessments across AI-integrated critical systems.

Table of Contents

Fundamentals and Introduction to Large Language Models
Securing Large Language Models
The Dual Nature of LLM Risks: Inherent Vulnerabilities and Malicious Actors
Mapping Trust Boundaries in LLM Architectures
Aligning LLM Security with Organizational Objectives and Regulatory Landscapes
Identifying and Prioritizing LLM Security Risks with OWASP
Diving Deep: Profiles of the Top 10 LLM Security Risks
Mitigating LLM Risks: Strategies and Techniques for Each OWASP Category
Adapting the OWASP Top 10 to Diverse Deployment Scenarios
Designing LLM Systems for Security: Architecture, Controls, and Best Practices
Integrating Security into the LLM Development Life Cycle: From Data Curation to Deployment
Operational Resilience: Monitoring, Incident Response, and Continuous Improvement
The Future of LLM Security: Emerging Threats, Promising Defenses, and the Path Forward
Appendix A
Appendix B

Publication date
Place of publication Birmingham
Language English
Dimensions 191 x 235 mm
Subject areas Computer Science / Networks / Security & Firewalls
Computer Science / Theory & Studies / Artificial Intelligence & Robotics
ISBN-10 1-83620-375-6 / 1836203756
ISBN-13 978-1-83620-375-9 / 9781836203759
Condition New