Programming the Finite Element Method (eBook)
685 pages
John Wiley & Sons (publisher)
978-1-118-53593-6 (ISBN)
Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method.
This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, convection boundary conditions, and interfaces to third-party tools such as ParaView, METIS and ARPACK. As in the previous editions, a wide variety of problem-solving capabilities are presented, including structural analysis, elasticity and plasticity, construction processes in geomechanics, uncoupled and coupled steady and transient fluid flow, and linear and nonlinear solid dynamics.
Key features:
- Updated to take into account advances in parallel computing as well as new material on thermal stress analysis
- Programs use an updated version of Fortran 2003
- Includes exercises for students
- Accompanied by website hosting software
Programming the Finite Element Method, Fifth Edition is an ideal textbook for undergraduate and postgraduate students in civil and mechanical engineering, applied mathematics and numerical analysis, and is also a comprehensive reference for researchers and practitioners.
Further information and source codes described in this text can be accessed at the following web sites:
- www.inside.mines.edu/~vgriffit/PFEM5 for the serial programs from Chapters 4-11
- www.parafem.org.uk for the parallel programs from Chapter 12
Ian M. Smith and Lee Margetts, University of Manchester, UK and D. V. Griffiths, Colorado School of Mines, USA
Professor Ian Smith is Professor Emeritus at the University of Manchester. He is a Fellow of the Royal Academy of Engineering and has published 200 research papers. He has authored five books, the most recent being Boundary Element Method with Programming (Springer, 2008).
D. V. Griffiths is Professor of Civil Engineering in the Division of Engineering at the Colorado School of Mines. His research focuses on oil-resource geomechanics, probabilistic geotechnics, soil mechanics and foundation engineering, and finite element software development.
Dr Lee Margetts is Head of Synthetic Environments at the University of Manchester Aerospace Research Institute. Dr Margetts' main areas of expertise are in structural mechanics, geotechnical engineering, high performance computing and tomographic imaging. His main research activities concern investigating how real materials (both organic and inorganic) behave and how this can be simulated using computers.
Chapter 1
Preliminaries: Computer Strategies
1.1 Introduction
Many textbooks exist which describe the principles of the finite element method of analysis and the wide scope of its applications to the solution of practical engineering and scientific problems. Usually, little attention is devoted to the construction of the computer programs by which the numerical results are actually produced. It is presumed that readers have access to pre-written programs (perhaps to rather complicated ‘packages’) or can write their own. However, the gulf between understanding in principle what to do, and actually doing it, can still be large for those without years of experience in this field.
The present book bridges this gulf. Its intention is to help readers assemble their own computer programs to solve particular engineering and scientific problems by using a ‘building block’ strategy specifically designed for computations via the finite element technique. At the heart of what will be described is not a ‘program’ or a set of programs but rather a collection (library) of procedures or subroutines which perform certain functions analogous to the standard functions (SIN, SQRT, ABS, etc.) provided in permanent library form in all useful scientific computer languages. Because of the matrix structure of finite element formulations, most of the building block routines are concerned with manipulation of matrices.
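To give a flavour of this building-block style, the following is a minimal, hypothetical sketch of a small library module containing one matrix routine; the module and routine names here are invented for illustration and are not those of the book's actual subroutine library.

```fortran
! Illustrative sketch only: a hypothetical building-block routine in the
! spirit described above, not a routine from the book's library.
MODULE building_blocks
  IMPLICIT NONE
  INTEGER, PARAMETER :: iwp = SELECTED_REAL_KIND(15)  ! working precision
CONTAINS
  SUBROUTINE matvec(a, x, y)
    ! Multiply matrix a by vector x, returning y = a*x
    REAL(iwp), INTENT(IN)  :: a(:,:), x(:)
    REAL(iwp), INTENT(OUT) :: y(:)
    INTEGER :: i
    DO i = 1, SIZE(a, 1)
      y(i) = DOT_PRODUCT(a(i,:), x)
    END DO
  END SUBROUTINE matvec
END MODULE building_blocks
```

A calling program would simply USE such a module and invoke matvec wherever a matrix-vector product is needed, in the same way it would invoke an intrinsic such as SQRT.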
The building blocks are then assembled in different patterns to make test programs for solving a variety of problems in engineering and science. The intention is that one of these test programs then serves as a platform from which new applications programs are developed by interested users.
The aim of the present book is to teach the reader to write intelligible programs and to use them. Both serial and parallel computing environments are addressed and the building block routines (numbering over 100) and all test programs (numbering over 70) have been verified on a wide range of computers. Efficiency is considered.
The chosen programming language is FORTRAN, which remains, overwhelmingly, the most popular language for writing large engineering and scientific programs. Later in this chapter a brief description will be given of the features of FORTRAN which influence the programming of the finite element method. The most recent update of the language was in 2008 (ISO/IEC 1539-1:2010). For parallel environments, MPI has been used, although the programming strategy has also been tested with OpenMP and with a combination of the two.
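To make the parallel setting concrete, here is a minimal sketch of an MPI program in Fortran, assuming an MPI library with Fortran bindings is installed (compiled with, for example, mpif90); it is illustrative only and not one of the book's test programs.

```fortran
! Minimal MPI sketch: each process reports its rank.
PROGRAM hello_mpi
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, rank, nprocs
  CALL MPI_INIT(ierr)                              ! start the MPI environment
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)   ! this process's id
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr) ! total number of processes
  WRITE(*,'(A,I0,A,I0)') 'Process ', rank, ' of ', nprocs
  CALL MPI_FINALIZE(ierr)                          ! shut down MPI
END PROGRAM hello_mpi
```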
1.2 Hardware
In principle, any computing machine capable of compiling and running FORTRAN programs can execute the finite element analyses described in this book. In practice, hardware will range from personal computers for more modest analyses and teaching purposes to ‘super’ computers, usually with parallel processing capabilities, for very large (especially non-linear 3D) analyses. For those who do not have access to the latter and occasionally wish to run large analyses, it is possible to gain access to such facilities on a pay-as-you-go basis through Cloud Computing (see Chapter 12). It is a powerful feature of the programming strategy proposed that the same software will run on all machine ranges. The special features of vector, multi-core, graphics and parallel processors are described later (see Sections 1.4 to 1.7).
1.3 Memory Management
In the programs in this book it will be assumed that sufficient main random access memory is available for the storage of data and the execution of programs. However, the arrays processed in finite element calculations might be of size, say, 1,000,000 by 10,000. Thus a computer would need a main memory of some 10^10 words (tens of gigabytes) to hold this information, and while some such computers exist, they are comparatively rare. A more typical memory size is of the order of 10^8 words (a gigabyte).
One strategy to get round this problem is for the programmer to write ‘out-of-memory’ or ‘out-of-core’ routines which arrange for the processing of chunks of arrays in memory and the transfer of the appropriate chunks to and from back-up storage.
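A minimal sketch of this idea, using Fortran direct-access file I/O, is shown below; the file name, array sizes, and the trivial "processing" step are illustrative assumptions, and the data file is assumed to exist already. One chunk of a large array is brought into memory at a time, processed, and written back.

```fortran
! Illustrative out-of-core sketch: process a large array in chunks
! via a direct-access file (assumes bigarray.dat already exists).
PROGRAM out_of_core
  IMPLICIT NONE
  INTEGER, PARAMETER :: iwp = SELECTED_REAL_KIND(15)
  INTEGER, PARAMETER :: n = 1000000, chunk = 10000   ! sizes are illustrative
  REAL(iwp) :: buffer(chunk)
  INTEGER :: rec, reclen
  INQUIRE(IOLENGTH=reclen) buffer                    ! record length of one chunk
  OPEN(10, FILE='bigarray.dat', ACCESS='DIRECT', RECL=reclen, &
       FORM='UNFORMATTED', STATUS='OLD')
  DO rec = 1, n/chunk
    READ(10, REC=rec) buffer        ! bring one chunk into main memory
    buffer = 2.0_iwp * buffer       ! ...process it in-memory...
    WRITE(10, REC=rec) buffer       ! write the chunk back to disk
  END DO
  CLOSE(10)
END PROGRAM out_of_core
```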
Alternatively, store management is removed from the user's control and given to the system hardware and software. The programmer sees only a single level of virtual memory of very large capacity, and information is moved from secondary memory to main memory and out again by the supervisor or executive program which schedules the flow of work through the machine. It is necessary for the system to be able to translate the virtual address of variables into a real address in memory. This translation usually involves a complicated bit-pattern matching called ‘paging’. The virtual store is split into segments or pages of fixed or variable size referenced by page tables, and the supervisor program tries to ‘learn’ from the way in which the user accesses data in order to manage the store in a predictive way. However, memory management can never be totally removed from the user's control. It must always be assumed that the programmer is acting in a reasonably logical manner, accessing array elements in sequence (by rows or columns as organised by the compiler and the language). If the user accesses a virtual memory of, say, 10^10 words in a random fashion, the paging requests will ensure that very little execution of the program can take place (see, e.g., Willé, 1995).
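Fortran stores arrays by columns, so the guidance above amounts to keeping the row index in the innermost loop. A small sketch contrasting the two orderings (the array size is chosen arbitrarily):

```fortran
! Column-major access order in Fortran: the first loop nest touches
! consecutive memory locations; the second strides through memory.
PROGRAM access_order
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 2000
  REAL :: a(n,n)
  INTEGER :: i, j
  DO j = 1, n          ! good: consecutive addresses, few page faults
    DO i = 1, n
      a(i,j) = REAL(i + j)
    END DO
  END DO
  DO i = 1, n          ! bad: a stride of n through memory on every access
    DO j = 1, n
      a(i,j) = REAL(i + j)
    END DO
  END DO
END PROGRAM access_order
```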
In the immediate future, ‘large’ finite element analyses, say involving more than 10 million unknowns, are likely to be processed by the vector and parallel processing hardware described in the next sections. When using such hardware there is usually a considerable time penalty if the programmer interrupts the flow of the computation to perform out-of-memory transfers or if automatic paging occurs. Therefore, in Chapter 3 of this book, special strategies are described whereby large analyses can still be processed ‘in-memory’. However, as problem sizes increase, there is always the risk that main memory, or fast subsidiary memory (‘cache’), will be exceeded with consequent deterioration of performance on most machine architectures.
1.4 Vector Processors
Early digital computers performed calculations ‘serially’, that is, if a thousand operations were to be carried out, the second could not be initiated until the first had been completed and so on. When operations are being carried out on arrays of numbers, however, it is perfectly possible to imagine that computations in which the result of an operation on two array elements has no effect on an operation on another two array elements, can be carried out simultaneously. The hardware feature by means of which this is realised in a computer is called a ‘pipeline’ and in general all modern computers use this feature to a greater or lesser degree. Computers which consist of specialised hardware for pipelining are called ‘vector’ computers. The ‘pipelines’ are of limited length and so for operations to be carried out simultaneously it must be arranged that the relevant operands are actually in the pipeline at the right time. Furthermore, the condition that one operation does not depend on another must be respected. These two requirements (amongst others) mean that some care must be taken in writing programs so that best use is made of the vector processing capacity of many machines. It is, moreover, an interesting side-effect that programs well structured for vector machines will tend to run better on any machine because information tends to be in the right place at the right time (in a special cache memory, for example).
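The dependence condition can be seen in a small sketch: the first loop below is vectorisable because each result is independent of the others, while the second is a recurrence in which each step needs the previous result and therefore cannot be fed through a pipeline in the same way.

```fortran
! Illustrative sketch of the dependence condition for pipelining.
PROGRAM pipelines
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 1000
  REAL :: a(n), b(n), c(n)
  INTEGER :: i
  b = 1.0; c = 2.0; a = 0.0
  DO i = 1, n
    a(i) = b(i) + c(i)      ! independent operations: vectorises
  END DO
  DO i = 2, n
    a(i) = a(i-1) + b(i)    ! each step needs the previous result: serial
  END DO
  PRINT *, a(n)
END PROGRAM pipelines
```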
True vector hardware tends to be expensive and, at the time of writing, a much more common way of increasing processing speed is to execute programs in parallel on many processors. The motivation here is that the individual processors are then ‘standard’ and therefore cheap. However, for really intensive computations, it is likely that an amalgamation of vector and parallel hardware is ideal.
1.5 Multi-core Processors
Personal computers from the 1980s onwards originally had one processor with a single central processing unit. Every 18 months or so, manufacturers were able to double the number of transistors on the processor and increase the number of operations that could be performed each second (the clock speed). By the 2000s, miniaturisation of the circuits reached a physical limit in terms of what could be reliably manufactured. Another problem was that it was becoming increasingly difficult to keep these processors cool and energy efficient. These design issues were side-stepped with the development of multi-core processors. Instead of pushing transistor counts and clock speeds further, manufacturers began to integrate two or more independent central processing units (cores) onto a single silicon die, or onto multiple dies in a single chip package. Multi-core processors have gradually replaced single-core processors on all computers over the past 10 years.
The performance gains of multi-core processing depend on the ability of the application to use more than one core at the same time. The programmer needs to write software to execute in parallel, and this is covered later. These modern so-called ‘scalar’ computers also tend to contain some vector-type hardware. The latest Intel processor has 256-bit vector units on each core, enough to compute four 64-bit floating point operations at the same time.
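As a minimal sketch of parallel execution on a multi-core processor (assuming an OpenMP-capable compiler, invoked with, for example, -fopenmp), the loop below shares its independent iterations across the available cores; it is illustrative only and not one of the book's programs.

```fortran
! Illustrative OpenMP sketch: independent loop iterations shared over cores.
PROGRAM multicore
  USE omp_lib
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 1000000
  REAL, ALLOCATABLE :: a(:)
  INTEGER :: i
  ALLOCATE(a(n))
  !$OMP PARALLEL DO
  DO i = 1, n
    a(i) = SQRT(REAL(i))    ! independent work, so cores can proceed at once
  END DO
  !$OMP END PARALLEL DO
  PRINT *, 'Threads available: ', omp_get_max_threads()
END PROGRAM multicore
```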
| Publication date (per publisher) | 28.8.2013 |
|---|---|
| Language | English |
| Subject areas | Mathematics / Computer Science ► Computer Science ► Theory / Study |
| Computer Science ► Further Topics ► CAD Programs | |
| Mathematics / Computer Science ► Mathematics ► Applied Mathematics | |
| Engineering ► Architecture | |
| Engineering ► Mechanical Engineering | |
| Keywords | Finite Element Method • FORTRAN • Numerical Methods & Algorithms • Mechanical Engineering • Mechanical Engineering Design • Electrical & Electronics Engineering |
| ISBN-10 | 1-118-53593-6 / 1118535936 |
| ISBN-13 | 978-1-118-53593-6 / 9781118535936 |