Hands-On LLM Serving and Optimization
Hosting LLMs at Scale
2026
O'Reilly Media (publisher)
979-8-3416-2149-7 (ISBN)
Large language models (LLMs) are rapidly becoming the backbone of AI-driven applications. Without proper optimization, however, LLMs can be expensive to run, slow to serve, and prone to performance bottlenecks. As the demand for real-time AI applications grows, along comes Hands-On LLM Serving and Optimization, a comprehensive guide to the complexities of deploying and optimizing LLMs at scale.
In this hands-on book, authors Chi Wang and Peiheng Hu take a real-world approach backed by practical examples and code, and assemble essential strategies for designing robust infrastructures that are equal to the demands of modern AI applications. Whether you're building high-performance AI systems or looking to enhance your knowledge of LLM optimization, this indispensable book will serve as a pillar of your success.
Learn the key principles for designing a model-serving system tailored to popular business scenarios
Understand the common challenges of hosting LLMs at scale while minimizing costs
Pick up practical techniques for optimizing LLM serving performance
Build a model-serving system that meets specific business requirements
Improve LLM serving throughput and reduce latency
Host LLMs in a cost-effective manner, balancing performance and resource efficiency
| Publication date (per publisher) | April 30, 2026 |
|---|---|
| Place of publication | Sebastopol |
| Language | English |
| Dimensions | 178 x 232 mm |
| Subject area | Computer Science ► Theory / Study ► Artificial Intelligence / Robotics |
| ISBN-13 | 979-8-3416-2149-7 / 9798341621497 |
| Condition | New |