Markov Chains (eBook)
John Wiley & Sons (publisher)
978-1-118-88269-6 (ISBN)
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features:
- Numerous exercises with solutions as well as extended case studies.
- A detailed and rigorous presentation of Markov chains with discrete time and state space.
- An appendix presenting probabilistic notions that are necessary to the reader, as well as giving more advanced measure-theoretic notions.
Carl Graham, CNRS (France's National Center for Scientific Research) and École Polytechnique, Palaiseau, France.
Chapter 1
First steps
1.1 Preliminaries
This book focuses on a class of random evolutions, in discrete time (by successive steps) on a discrete state space (finite or countable, with isolated elements), which satisfy a fundamental assumption, called the Markov property. This property can be described informally as follows: at each step, the evolution "forgets" its past and is "regenerated," retaining, as the sole information from its past relevant to its future evolution, its present state. The probabilistic description of such an evolution requires
- a law (probability measure) for drawing its initial state and
- a family of laws for drawing iteratively its state at the “next future instant” given its “present state,” indexed by the state space.
Such a random evolution will be called a Markov chain.
Precise definitions can be found in the Appendix, Section A.3, but we give now the probabilistic framework. A probability space $(\Omega, \mathcal{F}, \mathbb{P})$ will be considered throughout. When a space $\mathcal{V}$ is discrete, usually its measurable structure is given by the collection of all its subsets, and all functions with values in $\mathcal{V}$ are assumed to be measurable.
A random variable (r.v.) $X$ with values in a measurable state space $\mathcal{V}$ is a measurable function
$$X : \omega \in \Omega \mapsto X(\omega) \in \mathcal{V}.$$
Intuitively, the output $X(\omega)$ varies randomly with the input $\omega$, which is drawn in $\Omega$ according to $\mathbb{P}$, and the measurability assumptions allow us to assign a probability to events defined through $X$.
For the random evolutions under investigation, the natural random elements are sequences $(X_n)_{n \in \mathbb{N}}$ taking values in the same discrete state space $\mathcal{V}$, which are called (random) chains or (discrete time) processes. Each $X_n$ should be an r.v., and its law on the discrete space $\mathcal{V}$ is then given by the values $\mathbb{P}(X_n = x)$ for $x \in \mathcal{V}$, and hence can be identified in a natural way with the vector $(\mathbb{P}(X_n = x))_{x \in \mathcal{V}}$.
Finite-dimensional marginals
More generally, for any $k \geq 1$ and $n_1 < n_2 < \cdots < n_k$ in $\mathbb{N}$, the random vector
$$(X_{n_1}, X_{n_2}, \ldots, X_{n_k})$$
takes values in the discrete space $\mathcal{V}^k$, and its law can be identified with the collection of the
$$\mathbb{P}(X_{n_1} = x_1, \ldots, X_{n_k} = x_k), \qquad x_1, \ldots, x_k \in \mathcal{V}.$$
All these laws for $k \geq 1$ and $n_1 < \cdots < n_k$ constitute the family of the finite-dimensional marginals of the chain or of its law.
Law of the chain
The r.v.
$$X = (X_n)_{n \in \mathbb{N}}$$
takes values in the product space $\mathcal{V}^{\mathbb{N}}$, which is uncountable as soon as $\mathcal{V}$ contains at least two elements. Hence, its law cannot, in general, be defined by the values it takes on the elements of $\mathcal{V}^{\mathbb{N}}$. In the Appendix, Section A.3 contains some mathematical results defining the law of $X$ from its finite-dimensional marginals.
Section A.1 contains some more elementary mathematical results used throughout the book, and Section A.2 contains a discussion of the total variation norm and of weak convergence of laws.
1.2 First properties of Markov chains
1.2.1 Markov chains, finite-dimensional marginals, and laws
1.2.1.1 First definitions
We now provide rigorous definitions.
Definition 1.2.1
Let $\mathcal{V}$ be a discrete state space. A matrix $P = (P(x, y))_{x, y \in \mathcal{V}}$ is a transition matrix on $\mathcal{V}$, or also a Markovian or stochastic matrix, if
$$P(x, y) \geq 0, \qquad \sum_{y \in \mathcal{V}} P(x, y) = 1, \qquad \forall x \in \mathcal{V}.$$
A sequence $(X_n)_{n \geq 0}$ of $\mathcal{V}$-valued random variables is a Markov chain on $\mathcal{V}$ with matrix $P$ and initial law $\pi_0$ if, for every $n \geq 0$ and $x_0, \ldots, x_n$ in $\mathcal{V}$,
$$\mathbb{P}(X_0 = x_0, X_1 = x_1, \ldots, X_n = x_n) = \pi_0(x_0) P(x_0, x_1) \cdots P(x_{n-1}, x_n).$$
Note that, by iteration, $(X_n)_{n \geq 0}$ is a Markov chain on $\mathcal{V}$ with matrix $P$ if and only if for every $n \geq 0$ and $x_0, \ldots, x_{n+1}$ in $\mathcal{V}$,
$$\mathbb{P}(X_0 = x_0, \ldots, X_{n+1} = x_{n+1}) = \mathbb{P}(X_0 = x_0, \ldots, X_n = x_n)\, P(x_n, x_{n+1}),$$
and that this is trivially true if $\mathbb{P}(X_0 = x_0, \ldots, X_n = x_n) = 0$.
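The two defining conditions on $P$ (nonnegative entries, rows summing to 1) are easy to check numerically. The following sketch is illustrative and not from the book; the function name and the example matrix are hypothetical.

```python
# Check that a candidate matrix is a transition (stochastic) matrix:
# all entries nonnegative and every row summing to 1.

def is_transition_matrix(P, tol=1e-12):
    """Return True if P (a list of rows) is a stochastic matrix."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )

# Hypothetical example on a three-element state space:
P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.0, 0.0, 1.0]]

print(is_transition_matrix(P))  # True: every row is a probability vector
```

A matrix failing either condition (for instance, a row summing to more than 1) is rejected by the same test.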
Markov chain evolution
A family of laws on $\mathcal{V}$ indexed by $\mathcal{V}$ is defined by
$$P(x, \cdot) = (P(x, y))_{y \in \mathcal{V}}, \qquad x \in \mathcal{V}.$$
The evolution of the chain can be obtained by independent draws, first of $X_0$ according to $\pi_0$, and then iteratively of $X_{n+1}$ according to $P(X_n, \cdot)$ for $n \geq 0$, without taking any further notice of the evolution before the present time or of its actual value.
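The iterative draw just described translates directly into a simulation routine. The following Python sketch is illustrative, not from the book; it encodes the states as integers $0, \ldots, |\mathcal{V}|-1$ and represents $\pi_0$ and the rows of $P$ as lists of probabilities.

```python
import random

def simulate_chain(pi0, P, n_steps, rng=random):
    """Simulate a Markov chain path X_0, ..., X_{n_steps}.

    pi0 : initial law, a list of probabilities over states 0..len(P)-1
    P   : transition matrix, a list of rows, each a probability vector
    """
    states = list(range(len(P)))
    x = rng.choices(states, weights=pi0)[0]      # draw X_0 according to pi0
    path = [x]
    for _ in range(n_steps):
        # draw X_{n+1} according to P(X_n, .), ignoring all earlier history
        x = rng.choices(states, weights=P[x])[0]
        path.append(x)
    return path

# Hypothetical two-state chain:
P = [[0.9, 0.1],
     [0.4, 0.6]]
print(simulate_chain([1.0, 0.0], P, 10))
```

Note that the draw of $X_{n+1}$ uses only the current state `x`, which is exactly the informal "forgetting" of the past stated in Section 1.1.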
Inhomogeneous Markov chains
A more general and complex evolution can be obtained by letting the law of the steps depend on the present instant of time, that is, using the analogous formulae with matrices $P_n = (P_n(x, y))_{x, y \in \mathcal{V}}$ depending on the time $n$ instead of a fixed $P$; this corresponds to a time-inhomogeneous Markov chain, but we will seldom consider this generalization.
Markov chain graph
The graph of the transition matrix $P$, or of a Markov chain with matrix $P$, is the oriented marked graph with nodes given by the elements of $\mathcal{V}$ and directed links given by the ordered pairs $(x, y)$ of elements of $\mathcal{V}$ such that $P(x, y) > 0$, marked by the value of $P(x, y)$. The restriction of the graph to two states $x \neq y$ in $\mathcal{V}$ is of the form [if $P(x, y) > 0$ and $P(y, x) > 0$]
$$x \;\underset{P(y, x)}{\overset{P(x, y)}{\rightleftarrows}}\; y.$$
The graph and the matrix are equivalent descriptors for the random evolution. The links from $x$ to $x$ in the graph are redundant, as they are marked by $P(x, x) = 1 - \sum_{y \neq x} P(x, y)$, but illustrate graphically the possible transitions from $x$.
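As an illustration (not from the book), the marked links of the graph can be listed directly from a transition matrix given as nested lists; states are encoded as integers here, and the function name is hypothetical.

```python
def transition_graph(P, states=None):
    """Directed marked graph of a transition matrix: one triple (x, y, mark)
    per ordered pair with P(x, y) > 0, the mark being the value P(x, y)."""
    states = states or list(range(len(P)))
    n = len(P)
    return [(states[x], states[y], P[x][y])
            for x in range(n) for y in range(n)
            if P[x][y] > 0]

# Hypothetical chain with an absorbing state 1:
P = [[0.5, 0.5],
     [0.0, 1.0]]
print(transition_graph(P))  # [(0, 0, 0.5), (0, 1, 0.5), (1, 1, 1.0)]
```

The self-loop $(1, 1, 1.0)$ is redundant information, as the text notes, but makes the absorbing state visible at a glance.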
1.2.1.2 Conditional formulations
The last formula in Definition 1.2.1 can be written as
$$\mathbb{P}(X_{n+1} = y \mid X_n = x, \ldots, X_0 = x_0) = \mathbb{P}(X_{n+1} = y \mid X_n = x) = P(x, y)$$
whenever $\mathbb{P}(X_n = x, \ldots, X_0 = x_0) > 0$, which is often used as the definition. Moreover, if $f : \mathcal{V} \to \mathbb{R}$ is nonnegative or bounded, then
$$\mathbb{E}(f(X_{n+1}) \mid X_n = x, \ldots, X_0 = x_0) = \mathbb{E}(f(X_{n+1}) \mid X_n = x) = \sum_{y \in \mathcal{V}} P(x, y) f(y).$$
For the sake of mathematical efficiency and simplicity, nonconditional expressions will be stressed, before possibly being translated into equivalent conditional formulations. As an example, Definition 1.2.1 immediately yields, by summing over $x_1, \ldots, x_{n-1}$ in $\mathcal{V}$, that
$$\mathbb{P}(X_0 = x_0, X_n = x_n) = \pi_0(x_0) \sum_{x_1, \ldots, x_{n-1} \in \mathcal{V}} P(x_0, x_1) \cdots P(x_{n-1}, x_n),$$
which is not quite so obvious starting from (1.2.1).
1.2.1.3 Initial law, instantaneous laws
For a Markov chain $(X_n)_{n \geq 0}$, the law $\pi_n$ of $X_n$ is called the instantaneous law at time $n$, and $\pi_0$ the initial law. The notations $\mathbb{P}$ and $\mathbb{E}$ implicitly imply that the initial law is given and arbitrary; $\mathbb{P}_\mu$ and $\mathbb{E}_\mu$ for some law $\mu$ on $\mathcal{V}$ indicate that $\pi_0 = \mu$, and $\mathbb{P}_x$ and $\mathbb{E}_x$ indicate that $\pi_0 = \delta_x$ for $x \in \mathcal{V}$. By linearity,
$$\mathbb{P}_\mu = \sum_{x \in \mathcal{V}} \mu(x)\, \mathbb{P}_x, \qquad \mathbb{E}_\mu = \sum_{x \in \mathcal{V}} \mu(x)\, \mathbb{E}_x.$$
A frequent abuse of notation is to write $\mathbb{P}_x = \mathbb{P}(\,\cdot \mid X_0 = x)$, and so on.
Lemma 1.2.2
Let $(X_n)_{n \geq 0}$ be a Markov chain with matrix $P$ and initial law $\pi_0$. Then, for $k \leq n$ in $\mathbb{N}$ and $x, y$ in $\mathcal{V}$,
$$\mathbb{P}(X_n = y \mid X_k = x) = P^{n-k}(x, y) \quad \text{whenever } \mathbb{P}(X_k = x) > 0,$$
and the instantaneous laws are given by
$$\pi_n(y) = \sum_{x \in \mathcal{V}} \pi_0(x) P^n(x, y), \qquad y \in \mathcal{V},$$
or in matrix notation
$$\pi_n = \pi_0 P^n.$$
Moreover, $(X_{kn})_{k \in \mathbb{N}}$ is a Markov chain with matrix $P^n$, the $n$th matrix power of $P$.
Proof:
This follows readily from Definition 1.2.1.
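As an illustrative sketch (hypothetical matrix, not from the book), the formula $\pi_n = \pi_0 P^n$ of Lemma 1.2.2 can be computed by iterating the one-step left action of $P$ on laws, with plain Python lists.

```python
def vec_mat(pi, P):
    """Left action of a transition matrix: (pi P)(y) = sum_x pi(x) P(x, y)."""
    n = len(P)
    return [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]

def instantaneous_law(pi0, P, n):
    """Return pi_n = pi_0 P^n by applying the one-step action n times."""
    pi = list(pi0)
    for _ in range(n):
        pi = vec_mat(pi, P)
    return pi

# Hypothetical two-state chain started from state 0:
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi5 = instantaneous_law([1.0, 0.0], P, 5)
print(pi5, sum(pi5))  # pi_5 is again a probability vector (mass 1)
```

Iterating the one-step action avoids forming $P^n$ explicitly; each instantaneous law stays a probability vector because $P$ preserves total mass.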
1.2.1.4 Law on the canonical space of the chain
The notions in the Appendix, Section A.3.4, will now be used.
Definition 1.2.1 is actually a statement on the law of the Markov chain $(X_n)_{n \geq 0}$, which it characterizes by giving an explicit expression for its finite-dimensional marginals in terms of its initial law $\pi_0$ and transition matrix $P$.
Indeed, some rather simple results in measure theory show that there is uniqueness of a law on the canonical probability space $\mathcal{V}^{\mathbb{N}}$ with product $\sigma$-field having a given finite-dimensional marginal collection.
It is immediate to check that this collection is consistent [with respect to (w.r.t.) projections], and then the Kolmogorov extension theorem (Theorem A.3.10) implies that there exists a law on the canonical probability space with the product $\sigma$-field such that the canonical (projection) process has the given finite-dimensional marginal collection, and hence is a Markov chain with initial law $\pi_0$ and transition matrix $P$ (see Corollary A.3.11).
The Kolmogorov extension theorem follows from a deep and general result in measure theory, the Carathéodory extension theorem.
1.2.2 Transition matrix action and matrix notation
1.2.2.1 Nonnegative and signed measures, total variation measure, and norm
A (nonnegative) measure $\mu$ on $\mathcal{V}$ is defined by (and can be identified with) a collection $(\mu(x))_{x \in \mathcal{V}}$ of nonnegative real numbers and, in the sense of nonnegative series,
$$\mu(A) = \sum_{x \in A} \mu(x), \qquad A \subset \mathcal{V}.$$
A measure $\mu$ is finite if its total mass $\mu(\mathcal{V})$ is finite, and then $\mu(A) \leq \mu(\mathcal{V}) < \infty$ for all $A \subset \mathcal{V}$. A measure is a probability measure, or a law, if $\mu(\mathcal{V}) = 1$.
For $r$ in $\mathbb{R}$, let $r^+ = \max(r, 0)$ and $r^- = \max(-r, 0)$ denote the nonnegative and nonpositive parts of $r$, which satisfy $r = r^+ - r^-$ and $|r| = r^+ + r^-$.
For $\mu = (\mu(x))_{x \in \mathcal{V}}$ with $\mu(x) \in \mathbb{R}$, the measures $\mu^+ = (\mu(x)^+)_{x \in \mathcal{V}}$, $\mu^- = (\mu(x)^-)_{x \in \mathcal{V}}$, and $|\mu| = (|\mu(x)|)_{x \in \mathcal{V}}$ can be defined termwise. Then,
$$\mu = \mu^+ - \mu^-$$
is the minimal decomposition of $\mu$ as a difference of (nonnegative) measures, which have disjoint supports, and
$$|\mu| = \mu^+ + \mu^-$$
is called the total variation measure of $\mu$.
If $\mu$ is such that $|\mu|$ is finite (equivalently, if both $\mu^+$ and $\mu^-$ are finite), then we can extend it to a signed measure acting on subsets of $\mathcal{V}$ by setting, in the sense of absolutely converging series,
$$\mu(A) = \sum_{x \in A} \mu(x), \qquad A \subset \mathcal{V},$$
and we can define its total variation norm by
$$\|\mu\|_{\mathrm{var}} = |\mu|(\mathcal{V}) = \sum_{x \in \mathcal{V}} |\mu(x)|.$$
Note that $|\mu(A)| \leq \|\mu\|_{\mathrm{var}}$ for all $A \subset \mathcal{V}$.
The space $\mathcal{M}(\mathcal{V})$ of all signed measures, furnished with the total variation norm, is a Banach space, which is isomorphic to the Banach space $\ell^1(\mathcal{V})$ of summable sequences with its natural norm.
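A minimal numerical sketch (illustrative values, not from the book) of the decomposition $\mu = \mu^+ - \mu^-$ and of the total variation norm on a finite state space, with a signed measure stored as a list of reals:

```python
def jordan_decomposition(mu):
    """Split a signed measure (list of reals) termwise into mu^+ and mu^-,
    so that mu = mu^+ - mu^- with disjoint supports."""
    mu_plus = [max(m, 0.0) for m in mu]
    mu_minus = [max(-m, 0.0) for m in mu]
    return mu_plus, mu_minus

def total_variation_norm(mu):
    """||mu||_var = |mu|(V) = sum_x |mu(x)|, the l^1 norm of the vector."""
    return sum(abs(m) for m in mu)

# A typical signed measure: the difference of two laws (total mass 0).
mu = [0.5 - 0.2, 0.3 - 0.3, 0.2 - 0.5]
plus, minus = jordan_decomposition(mu)
print(plus, minus, total_variation_norm(mu))
```

The example illustrates the disjoint supports of $\mu^+$ and $\mu^-$: at each state, at most one of the two parts is nonzero.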
Probability measures or laws
The space of probability measures
$$\mathcal{P}(\mathcal{V}) = \{\mu \in \mathcal{M}(\mathcal{V}) : \mu \geq 0,\ \mu(\mathcal{V}) = 1\}$$
is the intersection of the cone of nonnegative measures with the unit sphere. It is a closed subset of $\mathcal{M}(\mathcal{V})$ and hence is complete for the induced metric.
Some properties of $\mathcal{M}(\mathcal{V})$ and $\mathcal{P}(\mathcal{V})$ are developed in the Appendix, Section A.2. Note that, according to the definition taken here, nonnegative measures with infinite mass are not signed measures.
Complex measures
Spectral theory naturally involves complex extensions. For its purposes, complex measures $\mu = (\mu(x))_{x \in \mathcal{V}}$ with $\mu(x) \in \mathbb{C}$ can be readily defined, and the modulus in $\mathbb{C}$, again denoted by $|\cdot|$, allows us to define a total variation measure $|\mu|$ and total variation norm $\|\mu\|_{\mathrm{var}}$ for $\mu$ in the corresponding space $\mathcal{M}_{\mathbb{C}}(\mathcal{V})$. The Banach space $\mathcal{M}_{\mathbb{C}}(\mathcal{V})$ is isomorphic to $\ell^1_{\mathbb{C}}(\mathcal{V})$. The real and imaginary parts of a complex measure are signed measures.
1.2.2.2 Line and column vectors, measure-function duality
In matrix notation, the functions $f$ from $\mathcal{V}$ to $\mathbb{R}$ are considered as column vectors $(f(x))_{x \in \mathcal{V}}$, and nonnegative or signed measures $\mu$ on $\mathcal{V}$ as line vectors $(\mu(x))_{x \in \mathcal{V}}$, of infinite lengths if $\mathcal{V}$ is infinite. The integral of a function $f$ by a measure $\mu$ is denoted by $\mu f$, in accordance with the matrix product
$$\mu f = \sum_{x \in \mathcal{V}} \mu(x) f(x),$$
defined in $[0, \infty]$ in the sense of nonnegative series if $\mu \geq 0$ and $f \geq 0$, and in $\mathbb{R}$ in the sense of absolutely converging series if $\mu \in \mathcal{M}(\mathcal{V})$ and $f \in L^\infty(\mathcal{V})$, the Banach space of bounded functions on $\mathcal{V}$ with the uniform norm.
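The duality bracket $\mu f$ above is just the product of a line vector with a column vector; a minimal sketch with plain Python lists (illustrative values, not from the book):

```python
def mu_f(mu, f):
    """Integral of f by mu: mu f = sum_x mu(x) f(x), i.e. the matrix product
    of a line vector (measure) with a column vector (function)."""
    return sum(m * v for m, v in zip(mu, f))

# When mu is a law, mu f is the expectation of f under mu.
law = [0.25, 0.25, 0.5]
f = [1.0, 2.0, 4.0]
print(mu_f(law, f))  # 0.25*1 + 0.25*2 + 0.5*4 = 2.75
```

For a law $\mu$ and bounded $f$, this is the expectation $\mathbb{E}_\mu(f(X_0))$; the same one-liner also evaluates $\mu(A)$ when $f$ is the indicator of $A$.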
For a subset $A$ of $\mathcal{V}$, the indicator function $\mathbb{1}_A$ is defined by $\mathbb{1}_A(x) = 1$ if $x \in A$ and $\mathbb{1}_A(x) = 0$ otherwise...
| Published | 2 April 2014 |
|---|---|
| Series | Wiley Series in Probability and Statistics |
| Language | English |
| Subject area | Mathematics / Computer Science ► Mathematics ► Statistics |
| | Mathematics / Computer Science ► Mathematics ► Probability / Combinatorics |
| | Engineering |
| ISBN-10 | 1-118-88269-5 / 1118882695 |
| ISBN-13 | 978-1-118-88269-6 / 9781118882696 |