
LLM Design Patterns (eBook)

A Practical Guide to Building Robust and Efficient AI Systems

(Author)

eBook Download: EPUB
2025
538 pages
Packt Publishing (publisher)
978-1-83620-702-3 (ISBN)


LLM Design Patterns - Ken Huang
€29.99 incl. VAT (CHF 29.30)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the euro price incl. VAT.
  • Download available immediately

This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in Generative AI, security, and strategy, this book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.
You'll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods, such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion. The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.
By the end of this book, you'll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.


Explore reusable design patterns, including data-centric approaches, model development, model fine-tuning, and RAG for LLM application development, as well as advanced prompting techniques.

Free with your book: PDF copy, AI Assistant, and Next-Gen Reader.

Key Features
  • Learn comprehensive LLM development, including data prep, training pipelines, and optimization
  • Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents
  • Implement evaluation metrics, interpretability, and bias detection for fair, reliable models

What you will learn
  • Implement efficient data prep techniques, including cleaning and augmentation
  • Design scalable training pipelines with tuning, regularization, and checkpointing
  • Optimize LLMs via pruning, quantization, and fine-tuning
  • Evaluate models with metrics, cross-validation, and interpretability
  • Understand fairness and detect bias in outputs
  • Develop RLHF strategies to build secure, agentic AI systems

Who this book is for
This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience in Python programming is a must.

Preface


Imagine building a skyscraper without blueprints—every floor constructed on the fly, with no clear plan to ensure stability, efficiency, or even functionality. Developing large language models (LLMs) without a structured approach can feel much the same. These powerful models, capable of transforming industries and redefining human–computer interactions, are intricate structures that demand meticulous planning and execution. Without a framework to navigate their complexities, practitioners risk creating systems that are inefficient, unreliable, or unable to meet their potential.

This book, LLM Design Patterns, provides the blueprints you need. It is a practical guide for engineers, researchers, and innovators seeking to design, build, and implement LLMs effectively. It focuses on four critical pillars: preparing and preprocessing data, training and optimizing models, evaluating and interpreting their behavior, and integrating them seamlessly with advanced knowledge retrieval techniques. These domains are explored through the lens of design patterns, offering proven solutions to recurring challenges in LLM development.

The rapid evolution of LLMs brings both extraordinary opportunities and daunting challenges. Issues such as data quality, scalability, and interpretability demand adaptive methodologies and innovative strategies. This book equips practitioners at all levels with the design patterns to address these challenges head-on, providing actionable insights and frameworks to not only build models but excel in the rapidly advancing world of LLMs. Whether you’re constructing your first model or refining a cutting-edge application, this book ensures that your approach is as robust as the technology you seek to harness.

Who this book is for


This book is for anyone involved in the development, deployment, or application of LLMs, including the following:

  • AI engineers and researchers: Individuals implementing LLM techniques in their projects
  • Data scientists and machine learning practitioners: Professionals seeking guidance on data preparation, model training, and optimization for LLMs
  • Software architects and project managers: Those aiming to structure and manage LLM-based projects, ensuring alignment with business and technical objectives

What this book covers


Chapter 1, Introduction to LLM Design Patterns, provides a foundational understanding of LLMs and introduces the critical role of design patterns in their development.

Chapter 2, Data Cleaning for LLM Training, equips you with practical tools and techniques that allow you to effectively clean your data for LLM training.
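As a small taste of the cleaning patterns the chapter covers, here is a minimal, illustrative sketch (the function name and heuristics are ours, not the book's): strip markup, normalize whitespace, drop exact duplicates, and filter out very short documents.

```python
import re
from hashlib import md5

def clean_corpus(docs, min_words=3):
    """Deduplicate, strip markup, and drop too-short documents."""
    seen, cleaned = set(), []
    for doc in docs:
        text = re.sub(r"<[^>]+>", " ", doc)           # strip HTML tags
        text = re.sub(r"\s+", " ", text).strip()      # normalize whitespace
        key = md5(text.lower().encode()).hexdigest()  # exact-duplicate fingerprint
        if key in seen or len(text.split()) < min_words:
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned

docs = ["<p>Hello   world, a test.</p>", "Hello world, a test.", "hi"]
print(clean_corpus(docs))  # → ['Hello world, a test.']
```

Real pipelines add near-duplicate detection (e.g., MinHash), language identification, and quality filtering on top of these basics.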

Chapter 3, Data Augmentation, helps you understand the data augmentation pattern in depth, from increasing the diversity of your training dataset to maintaining its integrity.

Chapter 4, Handling Large Datasets for LLM Training, teaches you advanced techniques for managing and processing the massive datasets essential for training state-of-the-art LLMs.

Chapter 5, Data Versioning, shows you how to implement effective data versioning strategies for LLM development.

Chapter 6, Dataset Annotation and Labeling, lets you explore advanced techniques for creating well-annotated datasets that can significantly impact your LLM’s performance across various tasks.

Chapter 7, Training Pipeline, helps you understand the key components of an LLM training pipeline, from data ingestion and preprocessing to model architecture and optimization strategies.

Chapter 8, Hyperparameter Tuning, demonstrates what the hyperparameters in LLMs are and strategies for optimizing them efficiently.

Chapter 9, Regularization, shows you different regularization techniques that are specifically tailored to LLMs.

Chapter 10, Checkpointing and Recovery, outlines strategies for determining optimal checkpoint frequency, efficient storage formats for large models, and techniques for recovering from various types of failures.

Chapter 11, Fine-Tuning, teaches you effective strategies for fine-tuning pre-trained language models.

Chapter 12, Model Pruning, lets you explore model pruning techniques, designed to reduce model size while maintaining performance.

Chapter 13, Quantization, gives you a look into quantization methods that can optimize LLMs for deployment on resource-constrained devices.
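For a rough flavor of the idea the chapter develops, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python (illustrative only; real deployments rely on optimized library kernels):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q, q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1e-8  # guard against all-zero tensors
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]

w = [0.12, -0.5, 0.03, 0.49]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# reconstruction error is bounded by half a quantization step (scale / 2)
assert all(abs(a - b) <= s / 2 for a, b in zip(w, w_hat))
```

Storing each weight as one byte instead of four cuts memory roughly 4x, at the cost of the bounded rounding error shown in the assertion.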

Chapter 14, Evaluation Metrics, explores the most recent and commonly used benchmarks for evaluating LLMs across various domains.

Chapter 15, Cross-Validation, introduces cross-validation strategies specifically designed for LLMs.

Chapter 16, Interpretability, explains interpretability in LLMs: the ability to understand and explain how a model processes inputs and generates outputs.

Chapter 17, Fairness and Bias Detection, shows how to ensure that a model’s outputs and decisions do not discriminate against or unfairly treat individuals or groups based on protected attributes.

Chapter 18, Adversarial Robustness, explains how adversarial attacks on LLMs manipulate a model’s output through small, often imperceptible changes to the input, and how to harden models against them.

Chapter 19, Reinforcement Learning from Human Feedback, takes you through a powerful technique for aligning LLMs with human preferences.

Chapter 20, Chain-of-Thought Prompting, demonstrates how you can leverage chain-of-thought prompting to improve your LLM’s performance on complex reasoning tasks.
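The pattern this chapter develops can be previewed with a zero-shot chain-of-thought setup: wrap the question so the model reasons before answering, then parse the final line. The helper names here are illustrative and the actual model call is omitted:

```python
def cot_prompt(question):
    """Wrap a question so the model is nudged to reason step by step
    before committing to an answer (zero-shot chain-of-thought)."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )

def extract_answer(completion):
    """Pull the final answer from a step-by-step completion."""
    for line in reversed(completion.splitlines()):
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return None

# a completion of the kind a model might return for the wrapped prompt
completion = "3 boxes * 4 apples = 12 apples.\nAnswer: 12"
print(extract_answer(completion))  # → 12
```

Separating the reasoning trace from the extracted answer also makes the intermediate steps available for inspection and evaluation.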

Chapter 21, Tree-of-Thoughts Prompting, allows you to implement tree-of-thoughts prompting to tackle complex reasoning tasks with your LLMs.

Chapter 22, Reasoning and Acting, teaches you about the ReAct framework, a powerful technique for prompting your LLMs to not only reason through complex scenarios but also plan and simulate the execution of actions, similar to how humans operate in the real world.

Chapter 23, Reasoning WithOut Observation, introduces a framework that gives LLMs the ability to reason about hypothetical situations and leverage external tools effectively.

Chapter 24, Reflection Techniques, demonstrates reflection in LLMs, which refers to a model’s ability to analyze, evaluate, and improve its own outputs.

Chapter 25, Automatic Multi-Step Reasoning and Tool Use, helps you understand how automatic multi-step reasoning and tool use significantly expand the problem-solving capabilities of LLMs, enabling them to tackle complex, real-world tasks.

Chapter 26, Retrieval-Augmented Generation, takes you through a technique that enhances the performance of AI models, particularly in tasks that require knowledge or data not contained within the model’s pre-trained parameters.
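The core RAG loop, retrieve relevant context and prepend it to the prompt, can be sketched with a toy word-overlap retriever (a stand-in for a real embedding index; all names are illustrative):

```python
def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (a toy stand-in
    for an embedding-based retriever)."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, corpus):
    """Augment the query with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "The Eiffel Tower is in Paris.",
    "Photosynthesis converts light into chemical energy.",
    "Paris is the capital of France.",
]
print(retrieve("Where is the Eiffel Tower?", corpus, k=1))
# → ['The Eiffel Tower is in Paris.']
```

Production systems swap the overlap score for dense embeddings and a vector store, but the retrieve-then-augment shape stays the same.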

Chapter 27, Graph-Based RAG, shows how to leverage graph-structured knowledge in RAG for LLMs.

Chapter 28, Advanced RAG, demonstrates how you can move beyond basic RAG methods and explore more sophisticated techniques designed to enhance LLM performance across a wide range of tasks.

Chapter 29, Evaluating RAG Systems, equips you with the knowledge necessary to assess the ability of RAG systems to produce accurate, relevant, and factually grounded responses.

Chapter 30, Agentic Patterns, shows you how agentic AI systems using LLMs can be designed to operate autonomously, make decisions, and take actions to achieve specified goals.

To get the most out of this book


To get the most out of this book, you should ideally have a foundational understanding of machine learning concepts and basic proficiency in Python programming. These prerequisites will help in grasping the technical methodologies and implementation strategies discussed throughout the chapters. Machine learning knowledge is essential for understanding key aspects of LLM development, such...

Publication date (per publisher): 30.5.2025
Language: English
Subject area: Computer Science > Theory / Artificial Intelligence & Robotics
ISBN-10: 1-83620-702-6 / 1836207026
ISBN-13: 978-1-83620-702-3 / 9781836207023
EPUB (no DRM)

Digital rights management: no DRM
This eBook contains no DRM or copy protection. Passing it on to third parties is nevertheless not legally permitted, because the purchase grants only the right to personal use.

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to fit the display and font size, so EPUB also works well on mobile reading devices.

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need the free Adobe Digital Editions software.
eReader: This eBook can be read on (almost) all eBook readers; however, it is not compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You will need a free app.

Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
