Transfer Learning for Multiagent Reinforcement Learning Systems
2021
Morgan & Claypool Publishers (publisher)
978-1-63639-136-6 (ISBN)
Surveys the literature on knowledge reuse in multiagent Reinforcement Learning. The authors define a unifying taxonomy of state-of-the-art solutions for reusing knowledge, providing a comprehensive discussion of recent progress in the area.
Learning to solve sequential decision-making tasks is difficult. Humans spend years exploring the environment, essentially at random, before they are able to reason, solve difficult tasks, and collaborate with other humans toward a common goal. Artificially intelligent agents are like humans in this respect. Reinforcement Learning (RL) is a well-known technique for training autonomous agents through interactions with the environment. Unfortunately, the learning process has high sample complexity: inferring an effective policy requires many interactions, especially when multiple agents act simultaneously in the environment.
However, previous knowledge can be leveraged to accelerate learning and enable solving harder tasks. In the same way that humans build skills and reuse them by relating different tasks, RL agents can reuse knowledge from previously solved tasks and from exchanging knowledge with other agents in the environment. In fact, virtually all of the most challenging tasks currently solved by RL rely on embedded knowledge-reuse techniques, such as Imitation Learning, Learning from Demonstration, and Curriculum Learning.
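To make the idea of knowledge reuse concrete, the following minimal sketch (not taken from the book; the chain environment, hyperparameters, and function names are illustrative assumptions) shows tabular Q-learning where a Q-table learned on a source task warm-starts learning on a target task, so the agent needs far fewer new interactions:

```python
import random

random.seed(0)

N = 6  # chain of states 0..N-1; reaching state N-1 gives reward 1


def step(state, action):
    """Toy chain MDP: action 1 moves right, action 0 moves left."""
    nxt = min(state + 1, N - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N - 1 else 0.0
    return nxt, reward, nxt == N - 1


def q_learning(episodes, q=None, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning; pass q to warm-start from previously learned values."""
    if q is None:
        q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q


# Learn from scratch on a "source" task, then reuse the Q-table as prior
# knowledge: only a handful of episodes are needed on the warm-started run.
q_source = q_learning(episodes=200)
q_transfer = q_learning(episodes=20, q=[row[:] for row in q_source])
greedy = [max((0, 1), key=lambda a: q_transfer[s][a]) for s in range(N - 1)]
print(greedy)  # the greedy policy should move right toward the goal
```

The same pattern generalizes to the multiagent setting the book covers, where the "source" of reused knowledge may be another agent rather than an earlier task.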
This book surveys the literature on knowledge reuse in multiagent RL. The authors define a unifying taxonomy of state-of-the-art solutions for reusing knowledge, providing a comprehensive discussion of recent progress in the area. Readers will find a thorough treatment of the many ways in which knowledge can be reused in multiagent sequential decision-making tasks, as well as the scenarios in which each approach is most effective. The authors also give their view of the area's current low-hanging fruit, as well as the still-open big questions that could lead to breakthrough developments. Finally, the book provides resources for researchers who intend to join this area or leverage its techniques, including a list of conferences, journals, and implementation tools.
This book will be useful to a wide audience and will hopefully promote new dialogues across communities and novel developments in the area.
Preface
Acknowledgments
Introduction
Background
Taxonomy
Intra-Agent Transfer Methods
Inter-Agent Transfer Methods
Experiment Domains and Applications
Current Challenges
Resources
Conclusion
Bibliography
Authors' Biographies
| Publication date | 22.06.2021 |
|---|---|
| Series | Synthesis Lectures on Artificial Intelligence and Machine Learning |
| Place of publication | San Rafael |
| Language | English |
| Dimensions | 191 x 235 mm |
| Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| ISBN-10 | 1-63639-136-2 / 1636391362 |
| ISBN-13 | 978-1-63639-136-6 / 9781636391366 |