
Electronic Structure Calculations on Graphics Processing Units (eBook)

From Quantum Chemistry to Condensed Matter Physics
eBook Download: EPUB
2016
John Wiley & Sons (Publisher)
978-1-118-67069-9 (ISBN)


Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics provides an overview of computing on graphics processing units (GPUs), a brief introduction to GPU programming, and the latest examples of code developments and applications for the most widely used electronic structure methods.

The book covers all commonly used basis sets, including localized Gaussian- and Slater-type basis functions, plane waves, wavelets, and real-space grid-based approaches.
The chapters detail the calculation of two-electron integrals, exchange-correlation quadrature, Fock matrix formation, solution of the self-consistent field equations, calculation of nuclear gradients to obtain forces, and methods to treat excited states within DFT. Other chapters focus on semiempirical and correlated wave function methods, including density-fitted second-order Møller–Plesset perturbation theory and both iterative and perturbative single- and multireference coupled cluster methods.

Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics presents an accessible overview of the field for graduate students and senior researchers in theoretical and computational chemistry, condensed matter physics, and materials science, as well as software developers looking for an entry point into the realm of GPU and hybrid GPU/CPU programming for electronic structure calculations.


Ross C. Walker, San Diego Supercomputer Center and Department of Chemistry and Biochemistry, University of California San Diego. Dr. Walker is an Assistant Research Professor at the San Diego Supercomputer Center, an Adjunct Assistant Professor in the Department of Chemistry and Biochemistry at the University of California San Diego, and an NVIDIA CUDA Fellow. He leads a team of scientists developing advanced techniques for molecular dynamics (MD) simulations aimed at improving drug and biocatalyst design. Aspects of his work of particular relevance for this book include the development of quantum mechanics (QM) and quantum mechanics/molecular mechanics (QM/MM) methods for MD simulations, and the development of a widely used GPU-accelerated MD code with funding from the National Science Foundation program SI2 (Software Infrastructure for Sustained Innovation). These methods, including the GPU-accelerated MD code, are integrated into the AMBER MD software package, which is used worldwide. In recent years Dr. Walker has lectured on multiple occasions about GPU acceleration of MD codes and scientific applications. His research is documented in over 30 peer-reviewed journal articles and multiple collected works. In 2010 he co-authored with Dr. Goetz a book chapter reviewing the use of GPU accelerators in quantum chemistry.

Andreas W. Goetz, San Diego Supercomputer Center, University of California San Diego. Dr. Goetz is an Assistant Project Scientist at the San Diego Supercomputer Center with strong expertise in method and scientific software development for quantum chemistry and molecular dynamics simulations on high-performance computing platforms. He is a contributing author of the ADF (Amsterdam Density Functional) software for DFT calculations and of the AMBER MD software package. In recent years, Dr. Goetz has given contributed and invited presentations of his work at renowned universities and international conferences, and has organized and taught workshops demonstrating the use of the software he develops. His research is documented in 21 peer-reviewed journal articles and 1 book contribution.

Chapter 1
Why Graphics Processing Units


Perri Needham1, Andreas W. Götz2 and Ross C. Walker1,2

1San Diego Supercomputer Center, UCSD, La Jolla, CA, USA

2Department of Chemistry and Biochemistry, UCSD, La Jolla, CA, USA

1.1 A Historical Perspective of Parallel Computing


The first general-purpose electronic computers capable of storing instructions came into existence in 1950. That is not to say, however, that the use of computers to solve electronic structure problems had not already been considered, or realized. As early as 1930, scientists were using a less advanced form of computation to solve their quantum mechanical problems: a group of assistants working simultaneously on mechanical calculators, but an early parallel computing machine nonetheless [1]. It was clear from the beginning that solutions to electronic structure problems could not be carried forward to many-electron systems without some computational device to lessen the mathematical burden. Today's computational scientists rely heavily on parallel electronic computers.

Parallel electronic computers can be broadly classified as having either multiple processing elements in the same machine (shared memory) or multiple machines coupled together to form a cluster/grid of processing elements (distributed memory). These arrangements make it possible to perform calculations concurrently across multiple processing elements, enabling large problems to be broken down into smaller parts that can be solved simultaneously (in parallel).
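To make the distinction concrete, the following minimal sketch (ours, not from the book) illustrates the shared-memory model using standard C++ threads: all threads read the same array, and each writes its partial sum to a shared result vector. On a distributed-memory machine the same reduction would instead require explicit messages between processes; a message-passing counterpart is sketched after the discussion of MPI later in this section.

```cpp
// Shared-memory parallel sum: a minimal illustrative sketch.
// Every thread sees the same 'data' array (shared memory) and
// writes its partial result to its own slot in 'partial'.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1'000'000;
    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> data(n, 1.0);            // shared input
    std::vector<double> partial(nthreads, 0.0);  // one slot per thread, no locking needed

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) {
        pool.emplace_back([&, t] {
            const std::size_t lo = n * t / nthreads;        // this thread's slice
            const std::size_t hi = n * (t + 1) / nthreads;
            partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
        });
    }
    for (auto& th : pool) th.join();

    std::cout << "sum = " << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
}
```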

The first electronic computers were primarily designed for, and funded by, military projects to assist in World War II and the start of the Cold War [2]. The first working programmable digital computer, Konrad Zuse's Z3 [3], was an electromechanical device that became operational in 1941 and was used by the German aeronautical research organization. Colossus, developed by the British for cryptanalysis during World War II, was the world's first programmable electronic digital computer; from 1944 onwards it was responsible for the decryption of valuable German military intelligence. It was a purpose-built machine that determined the encryption settings for the German Lorenz cipher, reading encrypted messages and instructions from paper tape. It was not until 1955, however, that the first general-purpose machine to execute floating-point arithmetic operations became commercially available: the IBM 704 (see Figure 1.1).

Figure 1.1 Photograph taken in 1957 at NASA featuring an IBM 704 computer, the first commercially available general-purpose computer with floating-point arithmetic hardware [4]

A common measure of compute performance is floating-point operations per second (FLOPS). The IBM 704 was capable of a mere 12,000 floating-point additions per second and required 1500–2000 ft² of floor space. Compare this to modern smartphones, which are capable of around 1.5 gigaFLOPS (1.5 × 10⁹ FLOPS) [5], thanks to the invention of the integrated circuit in 1958 and six subsequent decades of its refinement. To put this in perspective: if the floor footprint of an IBM 704 were instead covered with modern-day smartphones laid side by side, the computational capacity of that floor space would grow from 12,000 to around 20,000,000,000,000 FLOPS (20 teraFLOPS). This is the equivalent of every person on the planet carrying out roughly 2800 floating-point additions per second. Statistics like these make it exceptionally clear just how far computer technology has advanced, and, while mobile internet and games might seem like the apex of the technology's capabilities, it has also opened doorways to computationally explore scientific questions in ways previously believed impossible.
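The arithmetic behind these figures can be reconstructed as follows; the smartphone footprint of roughly 0.15 ft² is our assumption, since the text does not state one:

$$\frac{2000\ \text{ft}^2}{0.15\ \text{ft}^2/\text{phone}} \approx 13{,}000\ \text{phones}, \qquad 13{,}000 \times 1.5 \times 10^{9}\ \text{FLOPS} \approx 2 \times 10^{13}\ \text{FLOPS},$$

$$\frac{2 \times 10^{13}\ \text{FLOPS}}{7 \times 10^{9}\ \text{people}} \approx 2800\ \text{FLOPS per person}.$$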

Computers today find their use in many different areas of science and industry, from weather forecasting and film making to genetic research, drug discovery, and nuclear weapon design. Without computers many scientific exploits would not be possible.

While the performance of individual computers continued to advance, the thirst for computational power for scientific simulation was such that by the late 1950s discussions had turned to utilizing multiple processors, working in harmony, to address more complex scientific problems. The 1960s saw the birth of parallel computing with the invention of multiprocessor systems. The first recorded example of a commercially available multiprocessor (parallel) computer was Burroughs Corporation's D825, released in 1962, which had four processors that accessed up to 16 memory modules via a crossbar switch (see Figure 1.2).

Figure 1.2 Photograph of Burroughs Corporation's D825 parallel computer [6]

This was followed in the 1970s by the concept of single-instruction multiple-data (SIMD) processor architectures, forming the basis of vector parallel computing. SIMD is an important concept in graphics processing unit (GPU) computing and is discussed in the next chapter.
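As a concrete illustration of the SIMD idea, here is a minimal sketch (ours, assuming an x86 CPU with SSE support) in which a single vector instruction performs four floating-point additions at once; GPUs scale the same principle to thousands of data elements, as the next chapter discusses.

```cpp
// SIMD in miniature: one SSE instruction adds four floats simultaneously.
// Assumes an x86 CPU (SSE is baseline on x86-64); compile with e.g. g++ -O2.
#include <iostream>
#include <xmmintrin.h>  // SSE intrinsics

int main() {
    alignas(16) float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    alignas(16) float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    alignas(16) float c[4];

    __m128 va = _mm_load_ps(a);      // load 4 floats into a 128-bit register
    __m128 vb = _mm_load_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  // single instruction, four additions
    _mm_store_ps(c, vc);

    for (float x : c) std::cout << x << ' ';  // prints: 11 22 33 44
    std::cout << '\n';
}
```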

Parallel computing opened the door to tackling complex scientific problems, including modeling electrons in molecular systems through quantum mechanical means (the subject of this book). To give an example, optimizing the geometry of any but the smallest molecular systems using sophisticated electronic structure methods can take days (if not weeks) on a single processing element (compute core). Parallelizing the calculation over multiple compute cores can significantly cut down the required computing time and thus enables a researcher to study complex molecular systems in more practical time frames, achieving insights otherwise thought inaccessible. The use of parallel electronic computers in quantum chemistry was pioneered in the early 1980s by the Italian chemist Enrico Clementi and co-workers [7]. Their parallel computer consisted of 10 compute nodes, loosely coupled into an array, which was used to calculate the Hartree–Fock (HF) self-consistent field (SCF) energy of a small fragment of DNA represented by 315 basis functions. At the time this was a considerable achievement. It was just the start, however, and by the late 1980s parallel programs had been developed for a wide range of quantum chemistry methods. These included HF methods to calculate the energy and nuclear gradients of a molecular system [8–11], the transformation of two-electron integrals [8, 9, 12], second-order Møller–Plesset perturbation theory [9, 13], and the configuration interaction method [8]. The development of parallel computing in quantum chemistry was dictated by developments in available technologies. In particular, the advent of application programming interfaces (APIs) such as the message-passing interface (MPI) library [14] made parallel computing much more accessible to quantum chemists, as did hardware developments that drove down the cost of parallel computing machines [10].
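For readers unfamiliar with MPI, the following minimal sketch (illustrative only, not code from the programs cited above) shows the message-passing style used on distributed-memory machines: each process, or rank, computes a partial result in its own private memory, and an explicit collective operation combines the results.

```cpp
// Distributed-memory parallel sum with MPI: a minimal illustrative sketch.
// Build and run with e.g.: mpicxx mpi_sum.cpp -o mpi_sum && mpirun -np 4 ./mpi_sum
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's id
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

    // Toy workload: each rank contributes its own partial result,
    // held in memory that no other rank can see directly.
    double partial = static_cast<double>(rank + 1);
    double total = 0.0;

    // Explicit communication: combine all partial sums onto rank 0.
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
}
```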

While parallel computing has long found widespread use in scientific computing, until recently it was reserved for those with access to high-performance computing (HPC) resources. However, for reasons discussed in the following, all modern computer architectures exploit parallel technology, and effective parallel programming is vital to utilizing the computational power of modern devices. Parallel processing is now standard across all devices fitted with modern-day processor architectures. In his 1965 paper [15], Gordon E. Moore first observed that the number of transistors (in principle, directly related to performance) on integrated circuits was doubling every 2 years (see Figure 1.3).

Figure 1.3 Microprocessor transistor counts 1971–2011. Until recently, the number of transistors on integrated circuits has been following Moore's Law [16], doubling approximately every 2 years

Since this observation was made, the semiconductor industry has preserved the trend by ensuring that chip performance doubles every 18 months through improved transistor efficiency and/or quantity. In meeting these performance goals, however, the industry has pushed chip design close to the limits of what is physically possible: the laws of physics dictate the minimum size of a transistor, the rate at which heat can be dissipated, and the speed of light.

“The size of transistors is approaching the size of atoms, which is a fundamental barrier” [17].

At the same time, clock frequencies cannot easily be increased, since both clock frequency and transistor density increase the power density, as illustrated by Figure 1.4. Processors are already operating at a power density that exceeds that of a hot plate and are approaching that of the core of a nuclear reactor.

Figure 1.4 Illustration of the ever-increasing power density within silicon chips with decreasing gate length. Courtesy Intel Corporation [18]
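The standard first-order model behind this statement, which the excerpt itself does not give, is the dynamic power of CMOS logic,

$$P_{\text{dyn}} \approx \alpha\, C\, V^{2} f,$$

where α is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency. Dividing by chip area gives the power density: raising f, or packing more switching transistors into the same area, both drive it up, which is why frequency scaling stalled.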

In order to continue scaling with Moore's Law while keeping power densities manageable, chip manufacturers have taken to increasing the number of cores per processor rather than the number of transistors per core. Most processors produced today comprise multiple cores and so are, by definition, parallel processing machines. In terms of processor performance this is a tremendous boon to science and industry; however, the growing core counts also bring increased complexity for the programmer, who must exploit all of the cores to fully utilize the available compute power. It is becoming more and more difficult for applications to achieve good scaling with increasing core counts, and hence...
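A standard way to quantify this scaling difficulty, though not part of the excerpt, is Amdahl's law: if a fraction p of a program's work parallelizes perfectly over N cores, the achievable speedup is

$$S(N) = \frac{1}{(1-p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1-p},$$

so even a code that is 95% parallel (p = 0.95) can never run more than 20 times faster, no matter how many cores are available.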
