Hardware Architectures for Deep Learning
Institution of Engineering and Technology (Publisher)
978-1-78561-768-3 (ISBN)
This book presents and discusses innovative ideas in the design, modelling, implementation, and optimization of hardware platforms for neural networks.
The rapid growth of server, desktop, and embedded applications based on deep learning has brought about a renaissance of interest in neural networks, with applications including image and speech processing, data analytics, robotics, healthcare monitoring, and IoT solutions. Implementing neural networks efficiently enough to support complex deep learning-based applications is a major challenge for embedded and mobile computing platforms with limited computational and storage resources and a tight power budget. Even for cloud-scale systems, it is critical to select the right hardware configuration, based on the neural network complexity and system constraints, in order to increase power and performance efficiency.
Hardware Architectures for Deep Learning provides an overview of this new field, from principles to applications, for researchers, postgraduate students and engineers who work on learning-based services and hardware platforms.
Masoud Daneshtalab is a tenured associate professor at Mälardalen University (MDH) in Sweden, an adjunct professor at Tallinn University of Technology (TalTech) in Estonia, and a member of the board of directors of Euromicro. His research interests include interconnection networks, brain-like computing, and deep learning architectures. He has published over 300 refereed papers. Mehdi Modarressi is an assistant professor at the Department of Electrical and Computer Engineering, University of Tehran, Iran. He is the founder and director of the Parallel and Network-based Processing research laboratory at the University of Tehran, where he leads several industrial and research projects on deep learning-based embedded system design and implementation.
Part I: Deep learning and neural networks: concepts and models
Chapter 1: An introduction to artificial neural networks
Chapter 2: Hardware acceleration for recurrent neural networks
Chapter 3: Feedforward neural networks on massively parallel architectures
Part II: Deep learning and approximate data representation
Chapter 4: Stochastic-binary convolutional neural networks with deterministic bit-streams
Chapter 5: Binary neural networks
Part III: Deep learning and model sparsity
Chapter 6: Hardware and software techniques for sparse deep neural networks
Chapter 7: Computation reuse-aware accelerator for neural networks
Part IV: Convolutional neural networks for embedded systems
Chapter 8: CNN agnostic accelerator design for low latency inference on FPGAs
Chapter 9: Iterative convolutional neural network (ICNN): an iterative CNN solution for low power and real-time systems
Part V: Deep learning on analog accelerators
Chapter 10: Mixed-signal neuromorphic platform design for streaming biomedical signal processing
Chapter 11: Inverter-based memristive neuromorphic circuit for ultra-low-power IoT smart applications
| Publication date | 28.04.2020 |
|---|---|
| Series | Materials, Circuits and Devices |
| Place of publication | Stevenage |
| Language | English |
| Dimensions | 156 x 234 mm |
| Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| ISBN-10 | 1-78561-768-0 / 1785617680 |
| ISBN-13 | 978-1-78561-768-3 / 9781785617683 |
| Condition | New |