
Inside the Message Passing Interface

Creating Fast Communication Libraries
Book | Softcover
384 pages
2018
De|G Press (publisher)
978-1-5015-1554-5 (ISBN)
CHF 146.90 incl. VAT
A hands-on guide to writing a Message Passing Interface, this book takes the reader on a tour across major MPI implementations, best optimization techniques, application-relevant usage hints, and a historical retrospective of the MPI world, all based on a quarter of a century spent inside MPI. Readers will learn to write MPI implementations from scratch, and to design and optimize communication mechanisms using pragmatic subsetting as the guiding principle. Inside the Message Passing Interface also covers MPI quirks and tricks to achieve best performance.

Dr. Alexander Supalov created the Intel Cluster Tools product line, including the Intel MPI Library that he designed and led between 2003 and 2015. He invented the common MPICH ABI and guided Intel efforts in the MPI Forum during the development of the MPI-2.1, MPI-2.2, and MPI-3 standards. Before that, Alexander designed new finite-element mesh-generation methods, contributing to the PARMACS and PARASOL interfaces, and developed the first full MPI-2 and IMPI implementations in the world. He graduated from the Moscow Institute of Physics and Technology in 1990, and earned his PhD in applied mathematics at the Institute of Numerical Mathematics of the Russian Academy of Sciences in 1995. Alexander holds 26 patents (more pending worldwide).

Dr. Alexander Supalov, Supalov HPC, Germany

Introduction – Learn what awaits you inside the book
  What this book is about
  Who should read this book
  Notation and conventions
  How to read this book

Overview
  Parallel computer
    Intraprocessor parallelism
    Interprocessor parallelism
    Exercises
  MPI standard
    MPI history
    Related standards
    Exercises
  MPI subsetting
    Motivation
    Typical examples
    Implementation practice
    Exercises











Shared memory – Learn how to create a simple MPI subset capable of basic blocking point-to-point and collective operations over shared memory
  Subset definition
    General assumptions
    Blocking point-to-point communication
    Blocking collective operations
    Exercises
  Communication mechanisms
    Basic communication
    Intraprocess performance
    Interprocess performance
    Exercises
  Startup and termination
    Process creation
      Two processes
      More processes
    Connection establishment
    Process termination
    Exercises
  Blocking point-to-point communication
    Limited message length
      Blocking protocol
    Unlimited message length
      Double buffering
      Eager protocol
      Rendezvous protocol
    Exercises
  Blocking collective operations
    Naive algorithms
    Barrier
    Broadcast
    Reduce and Allreduce
    Exercises
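To give a flavor of what the shared memory chapter builds, here is a minimal sketch, not taken from the book, of a shared-memory hand-off between two processes: a fixed-size message cell in a mapped region plus a ready flag that the receiver polls, roughly the primitive underneath a blocking intranode send and receive. It assumes a POSIX system; the structure and variable names are illustrative.

```c
/* Minimal sketch of a shared-memory "send/receive" between two processes:
 * a single fixed-size cell plus a ready flag, polled by the receiver.
 * Illustrative only; a real implementation needs memory fences/atomics. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_MAX 64

typedef struct {
    volatile int ready;     /* 0: cell empty, 1: message present */
    char payload[MSG_MAX];  /* fixed-size message cell */
} cell_t;

int main(void)
{
    /* Anonymous shared mapping visible to both parent and child. */
    cell_t *cell = mmap(NULL, sizeof(*cell), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (cell == MAP_FAILED) return 1;
    cell->ready = 0;

    if (fork() == 0) {                    /* "rank 1": receiver */
        while (!cell->ready)              /* busy-wait for the flag */
            ;
        printf("received: %s\n", cell->payload);
        return 0;
    }
    /* "rank 0": sender */
    strcpy(cell->payload, "hello over shared memory");
    cell->ready = 1;                      /* publish the message */
    wait(NULL);                           /* join the child */
    munmap(cell, sizeof(*cell));
    return 0;
}
```

A real implementation would add proper memory fences, per-pair cells or ring buffers for double buffering, and an eager versus rendezvous split for longer messages, which is exactly what the later sections of this chapter cover.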











Sockets – Learn how to create an MPI subset capable of all point-to-point and blocking collective operations over Ethernet and other IP capable networks
  Subset definition
    General assumptions
    Blocking point-to-point communication
    Nonblocking point-to-point operations
    Blocking collective operations
    Exercises
  Communication mechanisms
    Basic communication
    Intranode performance
    Internode performance
    Exercises
  Synchronous progress engine
    Communication establishment
    Data transfer
    Exercises
  Startup and termination
    Process creation
      Startup command
      Process daemon
      Out-of-band communication
      Host name resolution
    Connection establishment
      At startup (eager)
      On request (lazy)
    Process termination
    Exercises
  Blocking point-to-point communication
    Source and tag matching
    Unexpected messages
    Exercises
  Nonblocking point-to-point communication
    Request management
    Exercises
  Blocking collective operations
    Communication context
    Basic algorithms
      Tree based algorithms
      Circular algorithms
      Hypercube algorithms
    Exercises
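To make the sockets chapter concrete, the following sketch (illustrative, not the book's code) shows the usual envelope-plus-payload framing for blocking point-to-point messages over a stream connection: a fixed-size envelope carries the source rank, tag, and payload length, so the receiver knows exactly how many bytes to read next. A socketpair() stands in for an established TCP connection; the type and helper names (envelope_t, write_full, read_full) are assumptions.

```c
/* Sketch: length+tag "envelope" framing over a connected stream socket,
 * the kind of wire format a sockets-based MPI subset might use. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

typedef struct {            /* envelope sent ahead of the payload */
    int32_t  src;           /* sending rank */
    int32_t  tag;           /* user tag, used for matching on the receive side */
    uint32_t len;           /* payload length in bytes */
} envelope_t;

/* write()/read() may transfer fewer bytes than asked for: loop until done. */
static int write_full(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n <= 0) return -1;
        p += n; len -= (size_t)n;
    }
    return 0;
}

static int read_full(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0) return -1;
        p += n; len -= (size_t)n;
    }
    return 0;
}

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);  /* stand-in for a TCP connection */

    const char *msg = "hello over a stream socket";
    envelope_t env = { .src = 0, .tag = 42, .len = (uint32_t)strlen(msg) + 1 };

    /* "send": envelope first, then payload */
    write_full(sv[0], &env, sizeof env);
    write_full(sv[0], msg, env.len);

    /* "receive": the envelope tells us how much payload to expect */
    envelope_t in;
    char buf[128];
    read_full(sv[1], &in, sizeof in);
    read_full(sv[1], buf, in.len);
    printf("from rank %d, tag %d: %s\n", in.src, in.tag, buf);

    close(sv[0]); close(sv[1]);
    return 0;
}
```

Source and tag matching, and the unexpected-message handling listed above, then amount to parking received envelopes and payloads until a matching receive is posted.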











OFA libfabrics – Learn how to create an MPI subset capable of all point-to-point and collective operations over InfiniBand and upcoming future networks
  Subset definition
    General assumptions
    Point-to-point operations
    Collective operations
    Exercises
  Communication mechanisms
    Basic communication
    Intranode performance
    Internode performance
    Exercises
  Startup and termination
    Process creation
    Credential exchange
    Connection establishment
    Process termination
    Exercises
  Point-to-point communication
    Blocking communication
    Nonblocking communication
    Exercises
  Collective operations
    Advanced algorithms
    Blocking operations
    Nonblocking operations
    Exercises
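As a glimpse of the territory this chapter covers, the sketch below queries libfabric for usable providers with fi_getinfo(), normally the first step before opening a fabric, domain, and endpoint. The hints chosen here, reliable datagram endpoints (FI_EP_RDM) with plain message capability (FI_MSG), are one plausible configuration for a two-sided MPI subset, not necessarily the one used in the book.

```c
/* Sketch: discovering usable libfabric providers with fi_getinfo(), the first
 * step an OFI-based MPI subset would take before opening endpoints.
 * Build with: cc probe.c -lfabric */
#include <rdma/fabric.h>
#include <stdio.h>

int main(void)
{
    struct fi_info *hints = fi_allocinfo();
    if (!hints) return 1;
    hints->ep_attr->type = FI_EP_RDM;   /* connectionless, reliable endpoints */
    hints->caps = FI_MSG;               /* we only need two-sided messaging */

    struct fi_info *info = NULL;
    int ret = fi_getinfo(FI_VERSION(1, 6), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
        fi_freeinfo(hints);
        return 1;
    }

    /* Each entry describes one provider/fabric/domain combination; a real
     * implementation would pick one and call fi_fabric(), fi_domain(), and
     * fi_endpoint() on it. */
    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider %-10s fabric %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```

On an InfiniBand cluster this will typically report the verbs provider, while on an ordinary workstation the tcp or sockets provider shows up, which is convenient for development.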











Advanced features – Learn how to add advanced MPI features including but not limited to heterogeneity, one-sided communication, file I/O, and language bindings
  Communication modes
    Standard
    Buffered
    Synchronous
  Heterogeneity
    Basic datatypes
    Simple datatypes
    Derived datatypes
    Exercises
  Groups, communicators, topologies
    Group management
    Communicator management
    Process topologies
    Exercises
  One-sided communication
    Mapped implementation
    Native implementation
    Exercises
  File I/O
    Standard I/O
    MPI file I/O
    Exercises
  Language bindings
    Fortran
    C++
    Java
    Python
    Exercises
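Of the advanced features listed above, one-sided communication is easiest to picture from the application side: the target exposes a window and the origin writes into it without a matching receive. The sketch below uses the standard MPI-3 calls for this; providing them inside the library is what the "Mapped implementation" and "Native implementation" sections deal with.

```c
/* Sketch: the user-visible side of MPI one-sided communication. Rank 0 puts
 * a value directly into rank 1's window; fences separate the access epochs.
 * Standard MPI-3 calls; run with e.g. mpirun -n 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *base;
    MPI_Win win;
    /* Every rank exposes one int through the window. */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    *base = -1;

    MPI_Win_fence(0, win);               /* open an access epoch */
    if (rank == 0) {
        int value = 42;
        /* Write into rank 1's window without rank 1 posting a receive. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);               /* complete all puts */

    if (rank == 1)
        printf("rank 1 window now holds %d\n", *base);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```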











Optimization – Learn how to optimize MPI internally by using advanced implementation techniques and available special hardware
  Direct data transfer
    Direct memory access
    Remote direct memory access
    Exercises
  Threads
    Thread support level
    Threads as MPI processes
    Shared memory extensions
    Exercises
  Multiple fabrics
    Synchronous progress engine
    Asynchronous progress engine
    Hybrid progress engine
    Exercises
  Dedicated hardware
    Synchronization
    Special memory
    Auxiliary networks
    Exercises
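To ground the direct data transfer sections, here is a small Linux-only sketch of cross-memory attach: process_vm_readv() copies a buffer straight out of another process's address space, one way an optimized intranode path can avoid staging data through a shared-memory segment. The pipe handshake and names are illustrative, not the book's code; fork() is used only so that the buffer address is valid in both processes.

```c
/* Sketch: Linux cross-memory attach (process_vm_readv) as a direct intranode
 * copy. Requires permission to read the peer's memory (same user; default
 * Yama settings allow a parent to read its child). */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char buf[64] = "initial";
    int ready[2], done[2];
    pipe(ready); pipe(done);

    pid_t pid = fork();
    if (pid == 0) {                       /* child: the "sender" */
        strcpy(buf, "message owned by the child");
        write(ready[1], "x", 1);          /* tell the parent the data is ready */
        char c;
        read(done[0], &c, 1);             /* wait until the parent has copied it */
        return 0;
    }

    /* parent: the "receiver" */
    char c, out[64] = "";
    read(ready[0], &c, 1);                /* wait for the child's buffer */

    /* The same virtual address is valid in the child because of fork(). */
    struct iovec local  = { .iov_base = out, .iov_len = sizeof out };
    struct iovec remote = { .iov_base = buf, .iov_len = sizeof buf };
    ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
    printf("copied %zd bytes: %s\n", n, out);

    write(done[1], "x", 1);               /* let the child exit */
    wait(NULL);
    return 0;
}
```

Internode, remote direct memory access plays the analogous role, with memory registration and completion handling as the added cost.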











Look ahead – Learn to recognize MPI advantages and drawbacks to better assess its future
  MPI axioms
    Reliable data transfer
    Ordered message delivery
    Dense process rank sequence
    Exercises
  MPI-4 en route
    Fault tolerance
    Exercises
  Beyond MPI
    Exascale challenge
    Exercises

References – Learn about books that may further extend your knowledge

Appendices

MPI Families – Learn about major MPI implementation families, their genesis, architecture and relative performance
  MPICH
    Genesis
    Architecture
    Details
      MPICH
      MVAPICH
      Intel MPI
    Exercises
  OpenMPI
    Genesis
    Architecture
    Details
    Exercises
  Comparison
    Market
    Features
    Performance
    Exercises

Alternative interfaces – Learn about other popular interfaces that are used to implement MPI
  DAPL
    Exercises
  SHMEM
    Exercises
  GasNET
    Exercises
  Portals
    Exercises

Solutions to all exercises – Learn how to answer all those questions

Publication date: 2018
Additional information: 35 tables, black and white; 35 illustrations, black and white
Place of publication: Boston
Language: English
Dimensions: 170 x 240 mm
Weight: 716 g
Subject areas: Mathematics / Computer Science > Computer Science > Networks
  Mathematics / Computer Science > Computer Science > Software Development
  Mathematics / Computer Science > Computer Science > Theory / Studies
  Social Sciences > Communication / Media > Book Trade / Library Science
  Technology > Architecture
  Technology > Electrical Engineering / Power Engineering
ISBN-10: 1-5015-1554-3 / 1501515543
ISBN-13: 978-1-5015-1554-5 / 9781501515545
Condition: New