
Iterative Solution of Large Sparse Systems of Equations (eBook)

eBook Download: PDF
2016 | 2nd edition
XXIII, 528 pages
Springer-Verlag
978-3-319-28483-5 (ISBN)


Iterative Solution of Large Sparse Systems of Equations - Wolfgang Hackbusch
EUR 67.40 incl. VAT
(CHF 65.85)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
The second edition of this classic monograph, complete with four new chapters and updated references, describes and analyses classical and modern methods, with emphasis on the algebraic structure of linear iterations, which is usually ignored in other literature.

The amount of work required increases dramatically with the size of the system, so one has to search for algorithms that solve systems of, e.g., several million equations as efficiently and accurately as possible. The choice of algorithm depends on the special properties of the matrices arising in practice. An important class of large systems arises from the discretization of partial differential equations. In this case, the matrices are sparse (i.e., they contain mostly zeroes) and well suited to iterative algorithms.
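To make this concrete, here is a minimal Python sketch, not taken from the book: it assumes NumPy and SciPy are available, and the grid size, right-hand side, and iteration count are arbitrary illustrative choices. It assembles the sparse five-point finite-difference matrix of the two-dimensional Poisson model problem and applies a plain Jacobi iteration to it.

    # Minimal sketch: five-point Poisson matrix and a plain Jacobi iteration.
    # Illustrative only; n, b, and the iteration count are arbitrary choices.
    import numpy as np
    import scipy.sparse as sp

    def poisson_matrix(n):
        """Sparse n^2 x n^2 matrix of the 2-D five-point stencil on an n x n grid."""
        I = sp.identity(n, format="csr")
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        return sp.kron(I, T) + sp.kron(T, I)   # 2-D Laplacian as a Kronecker sum

    def jacobi(A, b, x0, num_iter=500):
        """Classical Jacobi iteration: x_{m+1} = x_m + D^{-1} (b - A x_m)."""
        d = A.diagonal()                       # D = diag(A); here every entry is 4
        x = x0.copy()
        for _ in range(num_iter):
            x = x + (b - A @ x) / d
        return x

    n = 50                                     # 50 x 50 interior grid -> 2500 unknowns
    A = poisson_matrix(n)
    b = np.ones(n * n)
    x = jacobi(A, b, np.zeros(n * n))
    print("residual norm after 500 Jacobi steps:", np.linalg.norm(b - A @ x))

Each row of this matrix has at most five nonzero entries, independently of the grid size, which is exactly the kind of sparsity that iterative methods exploit; the slow (but guaranteed) convergence of the Jacobi iteration on this model problem is one motivation for the faster methods analysed in the book.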

The first edition of this book grew out of a series of lectures given by the author at the Christian-Albrecht University of Kiel to students of mathematics. The second edition includes several quite novel approaches.



Wolfgang Hackbusch is a Professor in the Scientific Computing department at the Max Planck Institute for Mathematics in the Sciences. His research areas include the numerical treatment of partial differential equations, the numerical treatment of integral equations, and hierarchical matrices.

Preface 5
Contents 7
List of Symbols and Abbreviations 18
Symbols 18
Greek Letters 19
Latin Letters 19
Abbreviations and Algorithms 22
Part I Linear Iterations 23
1 Introduction 25
1.1 Historical Remarks Concerning Iterative Methods 25
1.2 Model Problem: Poisson Equation 26
1.3 Notation 29
1.3.1 Index Sets, Vectors, and Matrices 29
1.3.2 Star Notation 31
1.4 A Single System Versus a Family of Systems 32
1.5 Amount of Work for the Direct Solution of a Linear System 32
1.6 Examples of Iterative Methods 34
1.7 Sparse Matrices Versus Fully Populated Matrices 37
2 Iterative Methods 39
2.1 Consistency and Convergence 39
2.1.1 Notation 39
2.1.2 Fixed Points 40
2.1.3 Consistency 41
2.1.4 Convergence 41
2.1.5 Convergence and Consistency 42
2.1.6 Defect Correction as an Example of an Inconsistent Iteration 42
2.2 Linear Iterative Methods 43
2.2.1 Notation, First Normal Form 43
2.2.2 Consistency and Second Normal Form 44
2.2.3 Third Normal Form 45
2.2.4 Representation of the Iterates xm 45
2.2.5 Convergence 46
2.2.6 Convergence Speed 48
2.2.7 Remarks Concerning the Matrices M, N, and W 50
2.2.8 Three-Term Recursions, Two- and Multi-Step Iterations 51
2.3 Efficacy of Iterative Methods 52
2.3.1 Amount of Computational Work 52
2.3.2 Efficacy 53
2.3.3 Order of Linear Convergence 54
2.4 Test of Iterative Methods 54
2.4.1 Consistency Test 54
2.4.2 Convergence Test 55
2.4.3 Test by the Model Problem 56
2.4.4 Stopping Criterion 56
3 Classical Linear Iterations in the Positive Definite Case 57
3.1 Eigenvalue Analysis of the Model Problem 57
3.2 Traditional Linear Iterations 59
3.2.1 Richardson Iteration 59
3.2.2 Jacobi Iteration 60
3.2.3 Gauss–Seidel Iteration 61
3.2.4 SOR Iteration 63
3.3 Block Versions 64
3.3.1 Block Structure 64
3.3.2 Block-Jacobi Iteration 65
3.3.3 Block-Gauss–Seidel Iteration 66
3.3.4 Block-SOR Iteration 67
3.4 Computational Work of the Iterations 67
3.4.1 Case of General Sparse Matrices 67
3.4.2 Amount of Work in the Model Case 68
3.5 Convergence Analysis 69
3.5.1 Richardson Iteration 69
3.5.2 Convergence Criterion for Positive Definite Iterations 76
3.5.3 Jacobi Iteration 77
3.5.4 Gauss–Seidel and SOR Iterations 78
3.5.5 Convergence of the Block Variants 84
3.6 Convergence Rates in the Case of the Model Problem 84
3.6.1 Richardson and Jacobi Iteration 84
3.6.2 Block-Jacobi Iteration 85
3.6.3 Numerical Examples for the Jacobi Variants 87
3.6.4 SOR and Block-SOR Iteration with Numerical Examples 88
4 Analysis of Classical Iterations Under Special Structural Conditions 90
4.1 2-Cyclic Matrices 90
4.2 Preparatory Lemmata 93
4.3 Analysis of the Richardson Iteration 95
4.4 Analysis of the Jacobi Iteration 97
4.5 Analysis of the Gauss–Seidel Iteration 98
4.6 Analysis of the SOR Iteration 99
4.6.1 Consistently Ordered Matrices 99
4.6.2 Theorem of Young 102
4.6.3 Order Improvement by SOR 105
4.6.4 Practical Handling of the SOR Method 106
4.6.5 p-Cyclic Matrices 106
4.7 Application to the Model Problem 107
4.7.1 Analysis in the Model Case 107
4.7.2 Gauss–Seidel Iteration: Numerical Examples 108
4.7.3 SOR Iteration: Numerical Examples 109
5 Algebra of Linear Iterations 110
5.1 Adjoint, Symmetric, and Positive Definite Iterations 111
5.1.1 Adjoint Iteration 111
5.1.2 Symmetric Iterations 113
5.1.3 Positive Definite Iterations 114
5.1.4 Positive Spectrum of NA 116
5.2 Damping of Linear Iterations 116
5.2.1 Definition 116
5.2.2 Damped Jacobi Iteration 117
5.2.3 Accelerated SOR 118
5.3 Addition of Linear Iterations 118
5.4 Product Iterations 120
5.4.1 Definition and Properties 120
5.4.2 Constructing Symmetric Iterations 122
5.4.3 Symmetric Gauss–Seidel and SSOR 124
5.5 Combination with Secondary Iterations 124
5.5.1 First Example for Secondary Iterations 125
5.5.2 Second Example for Secondary Iterations 126
5.5.3 Convergence Analysis in the General Case 127
5.5.4 Analysis in the Positive Definite Case 129
5.5.5 Estimate of the Amount of Work 131
5.5.6 Numerical Examples 132
5.6 Transformations 133
5.6.1 Left Transformation 133
5.6.2 Right Transformation 136
5.6.3 Kaczmarz Iteration 137
5.6.4 Cimmino Iteration 139
5.6.5 Two-Sided Transformation 140
5.6.6 Similarity Transformation 143
6 Analysis of Positive Definite Iterations 144
6.1 Different Cases of Positivity 144
6.2 Convergence Analysis 146
6.2.1 Case 1: Positive Spectrum 146
6.2.2 Case 2: Positive Definite 147
6.2.3 Case 3: Positive Definite Iteration 148
6.2.4 Case 4: Positive Definite W + W^H or N + N^H 149
6.2.5 Case 5: Symmetrised Iteration Φ^sym 150
6.2.6 Case 6: Perturbed Positive Definite Case 152
6.3 Symmetric Gauss–Seidel Iteration and SSOR 153
6.3.1 The Case A > 0
6.3.2 SSOR in the 2-Cyclic Case 155
6.3.3 Modified SOR 156
6.3.4 Unsymmetric SOR Method 157
6.3.5 Numerical Results for the SSOR Iteration 157
7 Generation of Iterations 158
7.1 Product Iterations 158
7.2 Additive Splitting Technique 160
7.2.1 Definition and Examples 160
7.2.2 Regular Splittings 162
7.2.3 Applications 165
7.2.4 P-Regular Splitting 168
7.3 Incomplete Triangular Decompositions 169
7.3.1 Introduction and ILU Iteration 169
7.3.2 Incomplete Decomposition with Respect to a Star Pattern 172
7.3.3 Application to General Five-Point Formulae 173
7.3.4 Modified ILU Decompositions 175
7.3.5 Existence and Stability of the ILU Decomposition 175
7.3.6 Properties of the ILU Decomposition 180
7.3.7 ILU Decompositions Corresponding to Other Patterns 182
7.3.8 Approximative ILU Decompositions 183
7.3.9 Blockwise ILU Decomposition 184
7.3.10 Numerical Examples 184
7.3.11 Remarks 185
7.4 Preconditioning 186
7.4.1 Idea of Preconditioning 186
7.4.2 Examples 187
7.4.3 Preconditioning in the Wider Sense 188
7.4.4 Rules for Condition Numbers and Spectral Equivalence 188
7.4.5 Equivalent Bilinear Forms 191
7.5 Time-Stepping Methods 192
7.6 Nested Iteration 193
Part II Semi-Iterations and Krylov Methods 194
8 Semi-Iterative Methods 196
8.1 First Formulation 196
8.1.1 Notation 196
8.1.2 Consistency and Asymptotic Convergence Rate 197
8.1.3 Error Representation 198
8.1.4 Krylov Space 200
8.2 Second Formulation of a Semi-Iterative Method 202
8.2.1 General Representation 202
8.2.2 Three-Term Recursion 204
8.3 Optimal Polynomials 205
8.3.1 Minimisation Problem 205
8.3.2 Discussion of the Second Minimisation Problem 206
8.3.3 Chebyshev Polynomials 208
8.3.4 Chebyshev Method (Solution of the Third Minimisation Problem) 208
8.3.5 Order Improvement by the Chebyshev Method 213
8.3.6 Optimisation Over Other Sets 214
8.3.7 Cyclic Iteration 215
8.3.8 Two- and Multi-Step Iterations 216
8.3.9 Amount of Work of the Semi-Iterative Method 216
8.4 Application to Iterations Discussed Above 217
8.4.1 Preliminaries 217
8.4.2 Semi-Iterative Richardson Method 218
8.4.3 Semi-Iterative Jacobi and Block-Jacobi Method 219
8.4.4 Semi-Iterative SSOR and Block-SSOR Iteration 219
8.5 Method of Alternating Directions (ADI) 222
8.5.1 Application to the Model Problem 222
8.5.2 General Representation 224
8.5.3 ADI in the Commutative Case 226
8.5.4 ADI Method and Semi-Iterative Methods 229
8.5.5 Amount of Work and Numerical Examples 230
9 Gradient Method 231
9.1 Reformulation as Minimisation Problem 231
9.1.1 Minimisation Problem 231
9.1.2 Search Directions 232
9.1.3 Other Quadratic Functionals 233
9.1.4 Complex Case 234
9.2 Gradient Method 235
9.2.1 Construction 235
9.2.2 Properties of the Gradient Method 236
9.2.3 Numerical Examples 238
9.2.4 Gradient Method Based on Other Basic Iterations 239
9.2.5 Numerical Examples 243
9.3 Method of the Conjugate Directions 244
9.3.1 Optimality with Respect to a Direction 244
9.3.2 Conjugate Directions 245
9.4 Minimal Residual Iteration 248
10 Conjugate Gradient Methods and Generalisations 249
10.1 Preparatory Considerations 249
10.1.1 Characterisation by Orthogonality 249
10.1.2 Solvability 251
10.1.3 Galerkin and Petrov–Galerkin Methods 251
10.1.4 Minimisation 252
10.1.5 Error Statements 252
10.2 Conjugate Gradient Method 254
10.2.1 First Formulation 254
10.2.2 CG Method (Applied to Richardson’s Iteration) 257
10.2.3 Convergence Analysis 258
10.2.4 CG Method Applied to Positive Definite Iterations 261
10.2.5 Numerical Examples 264
10.2.6 Amount of Work of the CG Method 265
10.2.7 Suitability for Secondary Iterations 266
10.2.8 Three-Term Recursion for pm 267
10.3 Method of Conjugate Residuals (CR) 270
10.3.1 Algorithm 270
10.3.2 Application to Hermitian Matrices 271
10.3.3 Stabilised Method of Conjugate Residuals 272
10.3.4 Convergence Results for Indefinite Matrices 273
10.3.5 Numerical Examples 275
10.4 Method of Orthogonal Directions 276
10.5 Solution of Nonsymmetric Systems 278
10.5.1 Generalised Minimal Residual Method (GMRES) 278
10.5.2 Full Orthogonalisation Method (FOM) 281
10.5.3 Biconjugate Gradient Method and Variants 282
10.5.4 Further Remarks 282
Part III Special Iterations 283
11 Multigrid Iterations 285
11.1 Introduction 286
11.1.1 Smoothing 286
11.1.2 Hierarchy of Systems of Equations 288
11.1.3 Prolongation 289
11.1.4 Restriction 291
11.1.5 Coarse-Grid Correction 292
11.2 Two-Grid Method 294
11.2.1 Algorithm 294
11.2.2 Modifications 294
11.2.3 Iteration Matrix 294
11.2.4 Numerical Examples 295
11.3 Analysis for a One-Dimensional Example 296
11.3.1 Fourier Analysis 296
11.3.2 Transformed Quantities 298
11.3.3 Convergence Results 299
11.4 Multigrid Iteration 301
11.4.1 Algorithm 301
11.4.2 Numerical Examples 302
11.4.3 Computational Work 304
11.4.4 Iteration Matrix 306
11.5 Nested Iteration 307
11.5.1 Discretisation Error and Relative Discretisation Error 307
11.5.2 Algorithm 308
11.5.3 Error Analysis 308
11.5.4 Application to Optimal Iterations 310
11.5.5 Amount of Computational Work 311
11.5.6 Numerical Examples 311
11.5.7 Comments 312
11.6 Convergence Analysis 313
11.6.1 Summary 313
11.6.2 Smoothing Property 313
11.6.3 Approximation Property 318
11.6.4 Convergence of the Two-Grid Iteration 321
11.6.5 Convergence of the Multigrid Iteration 321
11.6.6 Case of Weaker Regularity 323
11.7 Symmetric Multigrid Methods 324
11.7.1 Symmetric and Positive Definite Multigrid Algorithms 324
11.7.2 Two-Grid Convergence for ν1 > 0, ν2 > 0
11.7.3 Smoothing Property in the Symmetric Case 327
11.7.4 Strengthened Two-Grid Convergence Estimates 328
11.7.5 V-Cycle Convergence 330
11.7.6 Unsymmetric Multigrid Convergence for all ν > 0
11.8 Combination of Multigrid Methods with Semi-Iterations 333
11.8.1 Semi-Iterative Smoothers 333
11.8.2 Damped Coarse-Grid Corrections 335
11.8.3 Multigrid as Basic Iteration of the CG Method 335
11.9 Further Comments 336
11.9.1 Multigrid Method of the Second Kind 336
11.9.2 Robust Methods 337
11.9.3 History of the Multigrid Method 337
11.9.4 Frequency Filtering Decompositions 338
11.9.5 Nonlinear Systems 340
12 Domain Decomposition and Subspace Methods 345
12.1 Introduction 345
12.2 Overlapping Subdomains 347
12.2.1 Introductory Example 347
12.2.2 Many Subdomains 349
12.3 Nonoverlapping Subdomains 349
12.3.1 Dirichlet–Neumann Method 349
12.3.2 Lagrange Multiplier Based Methods 350
12.4 Schur Complement Method 352
12.4.1 Nonoverlapping Domain Decomposition with Interior Boundary 352
12.4.2 Direct Solution 352
12.4.3 Preconditioners of the Schur Complement 354
12.4.4 Multigrid-like Domain Decomposition Methods 355
12.5 Subspace Iteration 356
12.5.1 General Construction 356
12.5.2 The Prolongations 357
12.5.3 Multiplicative and Additive Schwarz Iterations 358
12.5.4 Interpretation as Gauss–Seidel and Jacobi Iteration 359
12.5.5 Classical Schwarz Iteration 360
12.5.6 Approximate Solution of the Subproblems 360
12.5.7 Strengthened Estimate 362
12.6 Properties of the Additive Schwarz Iteration 364
12.6.1 Parallelism 364
12.6.2 Condition Estimates 364
12.6.3 Convergence Statements 367
12.7 Analysis of the Multiplicative Schwarz Iteration 369
12.7.1 Convergence Statements 369
12.7.2 Proofs of the Convergence Theorems 372
12.8 Examples 377
12.8.1 Schwarz Method With Proper Domain Decomposition 377
12.8.2 Additive Schwarz Iteration with Coarse-Grid Correction 378
12.8.3 Formulation in the Case of Galerkin Discretisation 378
12.9 Multigrid Iterations as Subspace Decomposition Method 379
12.9.1 Braess’ Analysis without Regularity 380
12.9.2 V-Cycle Interpreted as Multiplicative Schwarz Iteration 382
12.9.3 Proof of V-Cycle Convergence 384
12.9.4 Hierarchical Basis Method 386
12.9.5 Multilevel Schwarz Iteration 389
12.9.6 Further Approaches 389
13 H-LU Iteration 391
13.1 Approximate LU Decomposition 391
13.1.1 Triangular Matrices 392
13.1.2 Solution of LUx = b 392
13.1.3 Matrix-Valued Solutions of LX = Z and XU = Z 393
13.1.4 Generation of the LU Decomposition 395
13.1.5 Cost of the H-LU Decomposition 396
13.2 H-LU Decomposition for Sparse Matrices 396
13.2.1 Finite Element Matrices 396
13.2.2 Separability of the Matrix 397
13.2.3 Construction of the Cluster Tree 398
13.2.4 Application to Inversion 400
13.2.5 Admissibility Condition 401
13.2.6 LU Decomposition 401
13.3 UL Decomposition of the Inverse Matrix 401
13.4 H-LU Iteration 402
13.4.1 General Construction 402
13.4.2 Algebraic LU Decomposition 404
13.5 Further Applications of Hierarchical Matrices 404
14 Tensor-based Methods 405
14.1 Tensors 405
14.1.1 Introductory Example: Lyapunov Equation 405
14.1.2 Nature of the Underlying Problems 406
14.1.3 Definition of Tensor Spaces 407
14.1.4 Case of Grid Functions 408
14.1.5 Kronecker Products of Matrices 409
14.1.6 Functions on Cartesian Products 409
14.2 Sparse Tensor Representation 410
14.2.1 r-Term Format (Canonical Format) 410
14.2.2 A Particular Example 411
14.2.3 Subspace Format (Tucker Format) 414
14.2.4 Hierarchical Tensor Format 415
14.3 Linear Systems 416
14.3.1 Poisson Model Problem 416
14.3.2 A Parametrised Problem 416
14.3.3 Solution of Linear Systems 418
14.3.4 CG-Type Methods 418
14.3.5 Multigrid Approach 418
14.3.6 Convergence 419
14.3.7 Parabolic Problems 419
14.4 Variational Approach 420
Appendix A Facts from Linear Algebra 421
A.1 Notation for Vectors and Matrices 421
A.2 Systems of Linear Equations 422
A.3 Eigenvalues and Eigenvectors 423
A.4 Block Vectors and Block Matrices 427
A.5 Orthogonality 429
A.5.1 Elementary Definitions 429
A.5.2 Orthogonal and Unitary Matrices 430
A.5.3 Sums of Subspaces and Orthogonal Complements 430
A.6 Normal Forms 431
A.6.1 Schur Normal Form 431
A.6.2 Jordan Normal Form 432
A.6.3 Diagonalisability 434
A.6.4 Singular Value Decomposition 436
Appendix B Facts from Normed Spaces 437
B.1 Norms 437
B.1.1 Vector Norms 437
B.1.2 Equivalence of All Norms 438
B.1.3 Corresponding Matrix Norms 439
B.1.4 Condition and Spectral Condition Number 441
B.2 Hilbert Norm 442
B.2.1 Elementary Properties 442
B.2.2 Spectral Norm 442
B.3 Correlation Between Norms and Spectral Radius 444
B.3.1 Spectral Norm and Spectral Radius 444
B.3.2 Matrix Norms Approximating the Spectral Radius 445
B.3.3 Geometrical Sum of Matrices 446
B.3.4 Numerical Radius of a Matrix 447
Appendix C Facts from Matrix Theory 451
C.1 Positive Definite Matrices 451
C.1.1 Definition and Notation 451
C.1.2 Rules and Criteria for Positive Definite Matrices 452
C.1.3 Remarks Concerning Positive Definite Matrices 453
C.2 Graph of a Matrix and Irreducible Matrices 455
C.3 Positive Matrices 458
C.3.1 Definition and Notation 458
C.3.2 Perron–Frobenius Theory of Positive Matrices 460
C.3.3 Diagonal Dominance 463
C.4 M-Matrices 465
C.4.1 Definition 465
C.4.2 M-Matrices and the Jacobi Iteration 466
C.4.3 M-Matrices and Diagonal Dominance 467
C.4.4 Further Criteria 469
C.5 H-Matrices 472
C.6 Schur Complement 472
Appendix D Hierarchical Matrices 473
D.1 Introduction 473
D.1.1 Fully Populated Matrices 473
D.1.2 Rank-r Matrices 475
D.1.3 Model Format 476
D.2 Construction 479
D.2.1 Cluster Trees 479
D.2.2 Block Cluster Tree 482
D.2.3 Partition 482
D.2.4 Admissible Blocks 483
D.2.5 Use of Bounding Boxes for Xτ 484
D.2.6 Set of Hierarchical Matrices 485
D.2.7 H2-Matrices 485
D.2.8 Storage 485
D.2.9 Accuracy 487
D.3 Matrix Operations 489
D.3.1 Matrix-Vector Multiplication 489
D.3.2 Truncations 490
D.3.3 Addition 490
D.3.4 Agglomeration 491
D.3.5 Matrix-Matrix Multiplication 491
D.3.6 Inversion and LU Decomposition 492
Appendix E Galerkin Discretisation of Elliptic PDEs 493
E.1 Variational Formulation of Boundary Value Problems 493
E.2 Galerkin Discretisation 495
E.3 Subdomain Problems and Finite Element Matrix 497
E.4 Relations Between the Continuous and Discrete Problems 498
E.5 Error Estimates 500
E.6 Relations Between Two Discrete Problems 502
References 503
Index 520

Publication date (per publisher): 21 June 2016
Series: Applied Mathematical Sciences
Additional information: XXIII, 509 p., 26 illus., 11 illus. in color
Place of publication: Cham
Language: English
Subject areas: Mathematics / Computer Science - Mathematics; Technology
Keywords: Analysis • iterative solution methods • matrices • matrix theory • Multigrid Method • Nonlinear Equations • Partial differential equations • Tensor-based Methods
ISBN-10 3-319-28483-5 / 3319284835
ISBN-13 978-3-319-28483-5 / 9783319284835
PDF (watermarked)
Size: 4.9 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalised to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but it is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading it, you can also read this eBook online in your web browser.

Buying eBooks from abroad
For tax reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
