
Information and Communication Theory (eBook)

Stefan Höst (Author)

eBook Download: EPUB
2019
John Wiley & Sons (Publisher)
978-1-119-43381-1 (ISBN)

€109.99 incl. VAT
(CHF 107.45)

An important text that offers an in-depth guide to how information theory sets the boundaries for data communication

In an accessible and practical style, Information and Communication Theory explores the topic of information theory and includes concrete tools that are appropriate for real-life communication systems. The text investigates the connection between theoretical and practical applications through a wide variety of topics, including an introduction to the basics of probability theory, information, (lossless) source coding, typical sequences as a central concept, channel coding, continuous random variables, Gaussian channels, discrete input continuous channels, and a brief look at rate distortion theory.

The author explains the fundamental theory together with typical compression algorithms and how they are used in reality. He moves on to review source coding and how much a source can be compressed, and also explains algorithms such as the LZ family with applications to, e.g., zip or PNG. In addition to exploring the channel coding theorem, the book includes illustrative examples of codes. This comprehensive text:

  • Provides an adaptive version of Huffman coding that estimates source distribution
  • Contains a series of problems that enhance an understanding of information presented in the text
  • Covers a variety of topics including optimal source coding, channel coding, modulation and much more
  • Includes appendices that explore probability distributions and the sampling theorem

Written for graduate and undergraduate students studying information theory, as well as professional engineers and master's students, Information and Communication Theory offers an introduction to how information theory sets the boundaries for data communication.



STEFAN HÖST is a Senior Lecturer in the Department of Electrical and Information Technology at Lund University, where he is active in the Broadband Communication research group. As a teacher he has been responsible for several courses within communication technology at the master's level. His main research area is within access networks seen from the physical layer. He is a frequent contributor to IEEE journals and to papers presented at multiple IEEE conferences.


Preface ix

Chapter 1 Introduction 1

Chapter 2 Probability Theory 5

2.1 Probabilities 5

2.2 Random Variable 7

2.3 Expectation and Variance 9

2.4 The Law of Large Numbers 17

2.5 Jensen's Inequality 21

2.6 Random Processes 25

2.7 Markov Process 28

Problems 33

Chapter 3 Information Measures 37

3.1 Information 37

3.2 Entropy 41

3.3 Mutual Information 48

3.4 Entropy of Sequences 58

Problems 63

Chapter 4 Optimal Source Coding 69

4.1 Source Coding 69

4.2 Kraft Inequality 71

4.3 Optimal Codeword Length 80

4.4 Huffman Coding 84

4.5 Arithmetic Coding 95

Problems 101

Chapter 5 Adaptive Source Coding 105

5.1 The Problem with Unknown Source Statistics 105

5.2 Adaptive Huffman Coding 106

5.3 The Lempel-Ziv Algorithms 112

5.4 Applications of Source Coding 125

Problems 129

Chapter 6 Asymptotic Equipartition Property and Channel Capacity 133

6.1 Asymptotic Equipartition Property 133

6.2 Source Coding Theorem 138

6.3 Channel Coding 141

6.4 Channel Coding Theorem 144

6.5 Derivation of Channel Capacity for DMC 155

Problems 164

Chapter 7 Channel Coding 169

7.1 Error-Correcting Block Codes 170

7.2 Convolutional Code 188

7.3 Error-Detecting Codes 203

Problems 210

Chapter 8 Information Measures For Continuous Variables 213

8.1 Differential Entropy and Mutual Information 213

8.2 Gaussian Distribution 224

Problems 232

Chapter 9 Gaussian Channel 237

9.1 Gaussian Channel 237

9.2 Parallel Gaussian Channels 244

9.3 Fundamental Shannon Limit 256

Problems 260

Chapter 10 Discrete Input Gaussian Channel 265

10.1 M-PAM Signaling 265

10.2 A Note on Dimensionality 271

10.3 Shaping Gain 276

10.4 SNR Gap 281

Problems 285

Chapter 11 Information Theory and Distortion 289

11.1 Rate-Distortion Function 289

11.2 Limit For Fix Pb 300

11.3 Quantization 302

11.4 Transform Coding 306

Problems 319

Appendix A Probability Distributions 323

A.1 Discrete Distributions 323

A.2 Continuous Distributions 327

Appendix B Sampling Theorem 337

B.1 The Sampling Theorem 337

Bibliography 343

Index 347

CHAPTER 2
Probability Theory


One of the most important insights when setting up a measure for information is that the observed quantity must have multiple choices to contain information [4]. These multiple choices are best described by means of probability theory. This chapter will recapitulate the parts of probability theory that are needed throughout the rest of the text. It is not, in any way, a complete course; for that purpose, the reader may refer to standard textbooks, e.g., [5, 6].

2.1 Probabilities


In short, a probability is a measure of how likely an event is to occur. It is represented by a number between zero and one, where zero means the event will not happen and one means it is certain to happen. The sum of the probabilities of all possible outcomes is one, since it is certain that one of them will happen.

It was the Russian mathematician Kolmogorov who in 1933 proposed the axioms for the theory as it is known today. The sample space Ω is the set of all possible outcomes of the random experiment. For a discrete source, the sample space is denoted as

Ω = {ω1, ω2, …, ωn}

where ωi are the outcomes.

An event A is a subset of the sample space, A ⊆ Ω, and the event is said to occur if the outcome from the random experiment is found in the event. Examples of specific events are

  • The certain event Ω (the full subset) and
  • The impossible event ∅ (the empty subset of Ω).

Each of the outcomes in the sample space, ω1, ω2, …, ωn, is also called an elementary event or, sometimes, an atomic event. To each event A, there is assigned a probability measure P(A), where 0 ⩽ P(A) ⩽ 1, which is a measure of how probable the event is to happen. The probabilities of the certain event and the impossible event are P(Ω) = 1 and P(∅) = 0, respectively.

If A and B are two disjoint events (see Figure 2.1), then the probability of the union is

Figure 2.1 Two disjoint sets A and B, where A ∩ B = ∅.

P(A ∪ B) = P(A) + P(B) (2.1)

This implies that

(2.2)

If the sets A and B are not disjoint, i.e., A ∩ B ≠ ∅ (see Figure 2.2), the probability of the union is

Figure 2.2 Two sets A and B, where A ∩ B ≠ ∅.

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (2.3)

where the probability of the intersection, P(A ∩ B), is included in both P(A) and P(B) and has to be subtracted.

An important concept in probability theory is how two events are related. This is described by the conditional probability, which gives the probability of an event A when it is known that an event B has occurred. Then, the atomic events of interest are those in both A and B, i.e., A ∩ B. Since the certain event is now B, which should have probability 1, the conditional probability should be normalized by P(B). The definition becomes

P(A|B) = P(A ∩ B)/P(B) (2.4)

If the two events A and B are independent, the probability of A should not change whether or not it is conditioned on B. Hence,

P(A|B) = P(A) (2.5)

Applying this to the definition of conditional probability (2.4), it is concluded that A and B are independent if and only if the probability of their intersection is equal to the product of the individual probabilities:

P(A ∩ B) = P(A)P(B) (2.6)
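To make the definitions (2.4)-(2.6) concrete, the following small Python sketch (an illustration, not taken from the book) enumerates the sample space of two fair dice and checks conditional probability and independence numerically.

```python
from fractions import Fraction

# Sample space: all ordered outcomes of two fair dice, each with probability 1/36.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """P(event) for a subset of the sample space, assuming equally likely outcomes."""
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] + w[1] == 7}   # event A: the sum of the dice is 7
B = {w for w in omega if w[0] == 3}          # event B: the first die shows 3

# Conditional probability, definition (2.4): P(A|B) = P(A n B) / P(B)
print(prob(A & B) / prob(B))                 # 1/6, the same as P(A), so (2.5) holds
print(prob(A & B) == prob(A) * prob(B))      # True: the independence criterion (2.6)
```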

2.2 Random Variable


A random variable, or a stochastic variable, X is a function from the sample space onto a specified set, most often the real numbers,

X: Ω → 𝒳 (2.7)

where 𝒳 = {x1, x2, …, xk}, k ⩽ n, is the set of possible values. To simplify notation, the event {ω|X(ω) = xi} is denoted by {X = xi}.

To describe the probability distribution of the random variable, for the discrete case the probability function is used, pX(x) = P(X = x), and for the continuous case the density function fX(x) is used. The distribution function for the variable is defined as

FX(x) = P(X ⩽ x) (2.8)

which can be derived as

FX(x) = ∑_{xi ⩽ x} pX(xi) (2.9)
FX(x) = ∫_{−∞}^{x} fX(t) dt (2.10)

On many occasions later in the text, the index ( · )X will be omitted if the intended variable is clear from the context. In Appendix A, the probability or density functions of some common probability distributions are listed together with some of their important properties.
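As a small illustration of (2.9), not taken from the book, the distribution function of a discrete random variable can be obtained from its probability function by a cumulative sum; here for a fair die.

```python
import numpy as np

# Hypothetical example: a fair die with pX(x) = 1/6 on {1, ..., 6}.
x_values = np.arange(1, 7)
p_X = np.full(6, 1 / 6)

# Distribution function (2.8)-(2.9): FX(x) = P(X <= x) = sum of pX(xi) for xi <= x.
F_X = np.cumsum(p_X)

for x, F in zip(x_values, F_X):
    print(f"F_X({x}) = {F:.3f}")   # 0.167, 0.333, ..., 1.000
```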

To consider how two or more random variables are jointly distributed, view a vector of random variables. This vector will represent a multidimensional random variable and can be treated in the same way. So, in the two-dimensional case, consider (X, Y), where X and Y are random variables with possible outcomes in 𝒳 and 𝒴, respectively. Then the set of joint outcomes is

𝒳 × 𝒴 = {(x, y) | x ∈ 𝒳, y ∈ 𝒴} (2.11)

Normally, for the discrete case, the joint probability function pX,Y(x, y) denotes the probability of the event {X = x, Y = y}, i.e., pX,Y(x, y) = P(X = x, Y = y). In the continuous case, the joint density function is denoted by fX,Y(x, y).

The marginal distribution is derived from the joint distribution as

pX(x) = ∑_y pX,Y(x, y) (2.12)
fX(x) = ∫ fX,Y(x, y) dy (2.13)

for the discrete and continuous case, respectively. Analogous to (2.4), the conditional probability distribution is defined as

pX|Y(x|y) = pX,Y(x, y)/pY(y) (2.14)
fX|Y(x|y) = fX,Y(x, y)/fY(y) (2.15)

This gives a probability distribution for the variable X when the outcome of the variable Y is known. Repeating this iteratively gives the chain rule for the probability of an n-dimensional random variable as

pX1,…,Xn(x1, …, xn) = ∏_{i=1}^{n} pXi|X1,…,Xi−1(xi|x1, …, xi−1) (2.16)

Combining the above results, two random variables X and Y are statistically independent if and only if

pX,Y(x, y) = pX(x)pY(y) (2.17)
fX,Y(x, y) = fX(x)fY(y) (2.18)

for all x and y.
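The relations (2.12)-(2.17) can be checked numerically. The sketch below (an illustration with a made-up joint probability table, not an example from the book) computes marginals and a conditional distribution and tests independence.

```python
import numpy as np

# Hypothetical joint probability table pX,Y(x, y): X indexes the rows, Y the columns.
p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.15, 0.25, 0.20]])
assert np.isclose(p_XY.sum(), 1.0)

# Marginal distributions, (2.12): sum out the other variable.
p_X = p_XY.sum(axis=1)
p_Y = p_XY.sum(axis=0)

# Conditional distribution, (2.14): pX|Y(x|y) = pX,Y(x, y) / pY(y).
p_X_given_Y = p_XY / p_Y

# Chain rule, (2.16) for n = 2: pX,Y(x, y) = pY(y) * pX|Y(x|y).
assert np.allclose(p_XY, p_X_given_Y * p_Y)

# Independence test, (2.17): compare the joint with the product of the marginals.
print(np.allclose(p_XY, np.outer(p_X, p_Y)))   # False for this table
```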

If X and Y are two independent random variables and Z = X + Y their sum, then the distribution of Z is given by the convolution

pZ(z) = ∑_x pX(x)pY(z − x) (2.19)
fZ(z) = ∫ fX(x)fY(z − x) dx (2.20)

for the discrete and continuous case, respectively.
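For the discrete case, (2.19) is an ordinary convolution of the two probability functions. A minimal sketch (again an illustration, not from the book): the probability function of the sum of two independent fair dice.

```python
import numpy as np

# Probability function of one fair die on the support {1, ..., 6}.
p_die = np.full(6, 1 / 6)

# Sum of two independent dice, (2.19): pZ(z) = sum over x of pX(x) pY(z - x),
# i.e., the convolution of the two probability functions; the support becomes {2, ..., 12}.
p_sum = np.convolve(p_die, p_die)

for z, p in enumerate(p_sum, start=2):
    print(f"p_Z({z:2d}) = {p:.4f}")   # largest at z = 7 with 6/36 = 0.1667
```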

Example 2.1

Consider two exponentially distributed random variables, X ∼ Exp(λ) and Y ∼ Exp(μ), with λ ≠ μ. Their density functions are

fX(x) = λe^{−λx}, x ⩾ 0 (2.21)
fY(y) = μe^{−μy}, y ⩾ 0 (2.22)

Let Z = X + Y, then the density function of Z is

fZ(z) = ∫_{0}^{z} λe^{−λx} μe^{−μ(z−x)} dx = (λμ/(μ − λ))(e^{−λz} − e^{−μz}), z ⩾ 0 (2.23)

If X and Y are equally distributed, μ = λ, the density function of Z = X + Y can be derived from (2.23) by using the limit value, given by l'Hospital's rule,

lim_{μ→λ} (e^{−λz} − e^{−μz})/(μ − λ) = ze^{−λz} (2.24)

as

fZ(z) = λ²ze^{−λz}, z ⩾ 0 (2.25)

which is the density function for an Erlang distributed variable, indicating that Z ∼ Erlang(2, λ).
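The result of Example 2.1 can be verified by simulation. The sketch below (an illustration with an arbitrarily chosen rate λ = 2, not part of the book) draws samples of Z = X + Y for the case μ = λ and compares a histogram with the Erlang density (2.25).

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0              # assumed common rate, the case mu = lambda
n = 1_000_000

# Z = X + Y with X and Y independent Exp(lam); numpy parameterizes by the scale 1/lam.
z = rng.exponential(1 / lam, n) + rng.exponential(1 / lam, n)

# Compare an empirical density estimate with (2.25): fZ(z) = lam^2 * z * exp(-lam * z).
hist, edges = np.histogram(z, bins=50, range=(0, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f_erlang = lam**2 * centers * np.exp(-lam * centers)

print(np.max(np.abs(hist - f_erlang)))   # small deviation, shrinking as n grows
```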

2.3 Expectation and Variance


Consider an example where a random variable X is the outcome of a roll with a fair die, i.e., the probabilities for each outcome are

pX(x) = 1/6, x = 1, 2, …, 6 (2.26)

A counterfeit version of the die can be obtained by inserting a small weight close to one side. Let Y be a random variable describing the outcome in this case. Simplifying the model, assume that the number one will never occur and the number six will occur twice as often as before. That is, pY(1) = 0, pY(6) = 2/6, and pY(x) = 1/6 for x = 2, 3, 4, 5. The arithmetic mean of the outcomes for both cases becomes

(1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5 (2.27)

However, intuitively the average value of several consecutive rolls should be larger for the counterfeit die than for the fair one. This is not reflected by the arithmetic mean above. Therefore, the expected value is introduced, which is a weighted mean where the probabilities are used as weights,

E[X] = ∑_x x pX(x) (2.28)

In the case with the fair die, the expected value is the same as the arithmetic mean, but for the manipulated die it becomes

E[Y] = 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(2/6) = 13/3 ≈ 4.33 (2.29)

Later, in Section 2.4, the law of large numbers is introduced, stating that the arithmetic mean of a series of outcomes from a random variable approaches the expected value as the length of the series grows. In this sense, the expected value is a natural definition of the mean of the outcome of a random variable.
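The fair and counterfeit dice also make a convenient numerical illustration (not from the book) of the expected values (2.28)-(2.29) and of the law of large numbers mentioned above.

```python
import numpy as np

values = np.arange(1, 7)
p_fair = np.full(6, 1 / 6)
p_fake = np.array([0, 1/6, 1/6, 1/6, 1/6, 2/6])   # counterfeit die from the text

# Expected values, (2.28): weighted means with the probabilities as weights.
print(values @ p_fair)   # 3.5
print(values @ p_fake)   # 4.333... = 13/3, as in (2.29)

# Law of large numbers preview (Section 2.4): the arithmetic mean of many rolls
# approaches the expected value.
rng = np.random.default_rng(1)
rolls = rng.choice(values, size=100_000, p=p_fake)
print(rolls.mean())      # close to 4.33
```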

A slightly more general definition of the expected value is as follows:

Definition 2.1

Let g(X) be a real-valued function of a random variable X; then the expected value of g(X) is

E[g(X)] = ∑_x g(x) pX(x) (2.30)
E[g(X)] = ∫ g(x) fX(x) dx (2.31)

for the discrete and continuous case, respectively.

Returning to the true and counterfeit die, the function g(x) = x² can be used. This leads to the so-called second-order moment of the variable X. In the case of the true die

E[X²] = (1² + 2² + 3² + 4² + 5² + 6²)/6 = 91/6 ≈ 15.17 (2.32)

and for the counterfeit

E[Y²] = 2²·(1/6) + 3²·(1/6) + 4²·(1/6) + 5²·(1/6) + 6²·(2/6) = 21 (2.33)
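The second-order moments (2.32) and (2.33) follow directly from Definition 2.1 with g(x) = x²; a short check in the same style as the earlier sketches (again only an illustration):

```python
import numpy as np

values = np.arange(1, 7)
p_fair = np.full(6, 1 / 6)
p_fake = np.array([0, 1/6, 1/6, 1/6, 1/6, 2/6])

# E[g(X)] with g(x) = x^2, per (2.30): the second-order moments of the two dice.
print(values**2 @ p_fair)   # 91/6 = 15.17, as in (2.32)
print(values**2 @ p_fake)   # 21.0, as in (2.33)
```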

Other functions can also be useful; for example, the Laplace transform of the density function can be expressed as

E[e^{−sX}] = ∫ e^{−sx} fX(x) dx (2.34)

In (2.20), it is stated that the density function for Z = X + Y is the convolution of the density functions for X and Y. Often it is easier to perform a convolution as a multiplication in the transform domain,

E[e^{−sZ}] = E[e^{−sX}] E[e^{−sY}] (2.35)

For the discrete case, the z-transform can be used in a similar way, as

E[z^{−X}] = ∑_x pX(x) z^{−x} (2.36)
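As an illustration (not from the book) of working in the transform domain: for a discrete variable the transform in (2.36) is a polynomial in z⁻¹ with the probabilities as coefficients, so multiplying transforms is the same as multiplying polynomials, which in turn is the same as convolving the coefficient sequences.

```python
import numpy as np

p_die = np.full(6, 1 / 6)            # probability function of a fair die

direct = np.convolve(p_die, p_die)   # direct convolution, (2.19)
via_poly = np.polymul(p_die, p_die)  # multiplication in the transform domain, (2.35)/(2.36)

print(np.allclose(direct, via_poly))   # True: both routes give the same pZ
```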

As an alternative to the...

Publication date (per publisher) 12.3.2019
Series IEEE Press Series on Digital & Mobile Communication
Language English
Subject area Technology > Electrical Engineering / Energy Technology
Technology > Communications Engineering
Keywords Adaptive Huffman coding • Adaptive source coding • AEP and its consequences • channel coding • Communication technology • Convexity of information measures • Electrical & Electronics Engineering • entropy of sequences • entropy rate of Markov models • Expectation and variance • Guide to Information Theory and Communication Engineering • Huffman codes • Jensen's inequality • Kraft inequality and optimal code word length • Lempel-Ziv algorithms • Markov process • Mobile & Wireless Communications • mutual information • optimal source coding • Probability Theory • random processes • Random Variable • Resource to Information and Communication Theory • Text on Information and Communication Theory • the law of large numbers • the problem of unknown source statistics • Understanding Information and Communication Theory
ISBN-10 1-119-43381-9 / 1119433819
ISBN-13 978-1-119-43381-1 / 9781119433811
