Markov Decision Processes and Reinforcement Learning
Cambridge University Press (Publisher)
978-1-009-09841-0 (ISBN)
- Not yet published (expected March 2027)
This book offers a comprehensive introduction to Markov decision process and reinforcement learning fundamentals using common mathematical notation and language. Its goal is to provide a solid foundation that enables readers to engage meaningfully with these rapidly evolving fields. Topics covered include finite and infinite horizon models, partially observable models, value function approximation, simulation-based methods, Monte Carlo methods, and Q-learning. Rigorous mathematical concepts and algorithmic developments are supported by numerous worked examples. As an up-to-date successor to Martin L. Puterman's influential 1994 textbook, this volume assumes familiarity with probability, mathematical notation, and proof techniques. It is ideally suited for students, researchers, and professionals in operations research, computer science, engineering, and economics.
Martin L. Puterman is Professor Emeritus at the Sauder School of Business, University of British Columbia. He received the INFORMS Lanchester Prize for his widely cited 1994 book Markov Decision Processes. He is an INFORMS Fellow and has received the CORS Award of Merit, the CORS Practice Prize and the INFORMS Case Competition Award. Timothy C. Y. Chan is Associate Vice-President and Vice-Provost, Strategic Initiatives and Professor of Industrial Engineering at the University of Toronto. He is an award-winning teacher, having been recognized with the INFORMS Prize for Teaching of OR/MS Practice, the INFORMS Case Competition Award, and the University of Toronto President's Teaching Award.
Preface; 1. Introduction; Part I. Fundamentals: 2. Markov decision process fundamentals; 3. Examples and applications; Part II. Classical Markov Decision Process Models: 4. Finite horizon models; 5. Infinite horizon models: expected discounted reward; 6. Infinite horizon models: expected total reward; 7. Infinite horizon models: long-run average reward; 8. Partially observable Markov decision processes; Part III. Reinforcement Learning: 9. Value function approximation; 10. Simulation in tabular models; 11. Simulation with function approximation; Appendix A. Notation and conventions; Appendix B. Markov chains; Appendix C. Linear programming; Bibliography; Index.
| Publication date (per publisher) | 1 March 2027 |
|---|---|
| Additional information | Worked examples or exercises |
| Place of publication | Cambridge |
| Language | English |
| Weight | 500 g |
| Subject area | Mathematics / Computer Science ► Mathematics ► Applied Mathematics |
| | Mathematics / Computer Science ► Mathematics ► Financial / Business Mathematics |
| ISBN-10 | 1-009-09841-1 / 1009098411 |
| ISBN-13 | 978-1-009-09841-0 / 9781009098410 |
| Condition | New |