Fundamentals of Convolutional Coding (eBook)
John Wiley & Sons (Verlag)
978-1-119-09867-6 (ISBN)
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field:
- Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding
- Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes
- Distance properties of convolutional codes
- Includes a downloadable solutions manual
Rolf Johannesson is Professor Emeritus of Information Theory at Lund University, Sweden, and a Fellow of the IEEE. He was awarded the honor of Professor, honoris causa, from the Institute for Information Transmission Problems, Russian Academy of Sciences, and is an elected member of the Royal Swedish Academy of Engineering Sciences. Dr. Johannesson's research interests include information theory, coding theory, and cryptography. Kamil Sh. Zigangirov is Professor Emeritus of Telecommunication Theory at Lund University, Sweden, and a Fellow of the IEEE. He is widely published in the areas of information theory, coding theory, mathematical statistics, and detection theory. Dr. Zigangirov is the inventor of the stack algorithm for sequential decoding and a co-inventor of LDPC convolutional codes.
CHAPTER 1
INTRODUCTION
1.1 WHY ERROR CONTROL?
The fundamental idea of information theory is that all communication is essentially digital: it is equivalent to generating, transmitting, and receiving randomly chosen binary digits, bits. When these bits are transmitted over a communication channel, or stored in a memory, it is likely that some of them will be corrupted by noise. In his 1948 landmark paper "A Mathematical Theory of Communication" [Sha48], Claude E. Shannon recognized that randomly chosen binary digits could (and should) be used for measuring the generation, transmission, and reception of information. Moreover, he showed that the problem of communicating information from a source over a channel to a destination can always be separated, without sacrificing optimality, into the following two subproblems: representing the source output efficiently as a sequence of binary digits (source coding) and transmitting binary, random, independent digits over the channel (channel coding). In Fig. 1.1 we show a general digital communication system. We use Shannon's separation principle and split the encoder and decoder into two parts each, as shown in Fig. 1.2. The channel coding parts can be designed independently of the source coding parts, which simplifies the use of the same communication channel for different sources.
Figure 1.1 Overview of a digital communication system.
Figure 1.2 A digital communication system with separate source and channel coding.
To a computer specialist, “bit” and “binary digit” are entirely synonymous. In information theory, however, “bit” is Shannon's unit of information [Sha48, Mas82]. For Shannon, information is what we receive when uncertainty is reduced. We get exactly 1 bit of information from a binary digit when it is drawn in an experiment in which successive outcomes are independent of each other and both possible values, 0 and 1, are equiprobable; otherwise, the information is less than 1. In the sequel, the intended meaning of “bit” should be clear from the context.
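This quantitative statement is captured by the binary entropy function $h(p) = -p \log_2 p - (1-p) \log_2 (1-p)$, a standard result that equals 1 bit only when $p = 1/2$. A minimal sketch (the function name is our own):

```python
import math

def binary_entropy(p: float) -> float:
    """Information in bits from one binary digit with P(outcome = 1) = p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty, so no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0 bit: equiprobable, independent outcomes
print(binary_entropy(0.11))  # about 0.5 bits: a biased binary digit carries less
```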
Shannon's celebrated channel coding theorem states that every communication channel is characterized by a single parameter Ct, the channel capacity, such that Rt randomly chosen bits per second can be transmitted arbitrarily reliably over the channel if and only if Rt ⩽ Ct. We call Rt the data transmission rate. Both Ct and Rt are measured in bits per second. Shannon showed that the specific value of the signal-to-noise ratio is not significant as long as it is large enough, that is, so large that Rt ⩽ Ct holds; what matters is how the information bits are encoded. The information should not be transmitted one information bit at a time, but long information sequences should be encoded such that each information bit has some influence on many of the bits transmitted over the channel. This radically new idea gave birth to the subject of coding theory.
Error control coding should protect digital data against errors that occur during transmission over a noisy communication channel or during storage in an unreliable memory. The last decades have been characterized not only by an exceptional increase in data transmission and storage but also by a rapid development in micro-electronics, providing us with both a need for and the possibility of implementing sophisticated algorithms for error control.
Before we study the advantages of coding, we shall consider the digital communication channel in more detail. At a fundamental level, a channel is often an analog channel that transfers waveforms (Fig. 1.3). Digital data $u_0 u_1 u_2 \ldots$, where $u_i \in \{0, 1\}$, must be modulated into waveforms to be sent over the channel.
Figure 1.3 A decomposition of a digital communication channel.
In communication systems where carrier phase tracking is possible (coherent demodulation), phase-shift keying (PSK) is often used. Although many other modulation systems are in use, PSK systems are very common and we will use one of them to illustrate how modulations generally behave. In binary PSK (BPSK), the modulator generates the waveform

$$ s_1(t) = \sqrt{\frac{2E_s}{T}}\, \sin(2\pi f_c t), \qquad 0 \leq t < T, $$

where $f_c$ is the carrier frequency, for the input 1 and $s_0(t) = -s_1(t)$ for the input 0. This is an example of antipodal signaling. Each symbol has duration $T$ seconds and energy $E_s = ST$, where $S$ is the power. The transmitted waveform is

$$ v(t) = \sum_i s_{u_i}(t - iT). $$
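As an illustration, the following sketch builds a sampled version of $v(t)$ from a short bit sequence; the carrier frequency, symbol time, and sampling rate are illustrative values, not taken from the text:

```python
import numpy as np

# Illustrative parameters (not from the text).
T, f_c, E_s = 1.0, 4.0, 1.0   # symbol duration, carrier frequency, symbol energy
fs = 100                      # samples per symbol interval
t = np.arange(0, T, 1 / fs)

s1 = np.sqrt(2 * E_s / T) * np.sin(2 * np.pi * f_c * t)  # waveform for input 1
s0 = -s1                                                 # antipodal waveform for input 0

u = [1, 0, 1, 1, 0]                                # data bits u_0, u_1, ...
v = np.concatenate([s1 if b else s0 for b in u])   # v(t) = sum_i s_{u_i}(t - iT)
```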
Assume that we have a waveform channel such that additive white Gaussian noise (AWGN) $n(t)$ with zero mean and two-sided power spectral density $N_0/2$ is added to the transmitted waveform $v(t)$; that is, the received waveform $r(t)$ is given by

$$ r(t) = v(t) + n(t), $$

where

$$ \mathrm{E}[n(t)] = 0 $$

and

$$ \mathrm{E}[n(t)\,n(s)] = \frac{N_0}{2}\,\delta(t - s), $$

where E[·] and δ(·) denote the mathematical expectation and the delta function, respectively.
Based on the received waveform during a signaling interval, the demodulator produces an estimate of the transmitted symbol. The optimum receiver is a matched filter with impulse response

$$ h(t) = s_1(T - t), \qquad 0 \leq t < T, $$

which is sampled every $T$ seconds (Fig. 1.4). The matched filter output $Z_i$ at the sample time $iT$,

$$ Z_i = \int_{(i-1)T}^{iT} r(t)\, s_1\bigl(t - (i-1)T\bigr)\, dt, $$

is a Gaussian random variable $N(\mu, \sigma^2)$ with mean

$$ \mu = \pm E_s, $$

where the sign is + or − according to whether the modulator input was 1 or 0, respectively, and variance

$$ \sigma^2 = \frac{E_s N_0}{2}. $$
Figure 1.4 Matched filter receiver.
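A quick Monte Carlo check of these statistics can be done in discrete time, where the matched filter reduces to a correlation of $r(t)$ with $s_1(t)$ over one symbol interval; all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T, f_c, E_s, N0 = 1.0, 4.0, 1.0, 0.5   # illustrative values
fs = 1000                              # samples per symbol interval
dt = 1 / fs
t = np.arange(0, T, dt)
s1 = np.sqrt(2 * E_s / T) * np.sin(2 * np.pi * f_c * t)

# Sampled matched filter output: Z = integral of r(t) s1(t) dt over one interval.
Z = np.empty(20000)
for k in range(Z.size):
    n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.size)  # white noise, PSD N0/2
    r = s1 + n                                                # a 1 was transmitted
    Z[k] = np.sum(r * s1) * dt

print(Z.mean())  # close to +E_s = 1.0
print(Z.var())   # close to E_s * N0 / 2 = 0.25
```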
After the sampler we can make a hard decision, that is, a binary quantization with threshold zero, of the random variable $Z_i$. Then we obtain the simplest and most important binary-input and binary-output channel model, the binary symmetric channel (BSC) with crossover probability $\epsilon$ (Fig. 1.5). The crossover probability is of course closely related to the signal-to-noise ratio $E_s/N_0$. Since the channel output for a given signaling interval depends only on the transmitted waveform and noise during that interval and not on other intervals, the channel is said to be memoryless.
Figure 1.5 Binary symmetric channel.
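The BSC is easy to simulate; below is a minimal sketch in which the crossover probability is obtained from $E_s/N_0$ via the relation $\epsilon = Q(\sqrt{2E_s/N_0})$ derived in the following paragraphs (the function and parameter names are our own):

```python
import numpy as np
from math import erfc, sqrt

def bsc(bits, eps, rng):
    """Pass bits through a BSC: flip each bit independently with probability eps."""
    flips = rng.random(bits.size) < eps
    return bits ^ flips

rng = np.random.default_rng(7)
Es_over_N0 = 2.0                          # illustrative signal-to-noise ratio (linear)
eps = 0.5 * erfc(sqrt(Es_over_N0))        # Q(sqrt(2 Es/N0)) via the erfc identity
u = rng.integers(0, 2, size=100_000, dtype=np.uint8)
y = bsc(u, eps, rng)
print(eps, np.mean(u != y))               # empirical flip rate is close to eps
```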
Because of symmetry, we can without loss of generality assume that a 0, that is, $s_0(t)$, is transmitted over the channel. Then we have a channel "error" if and only if the matched filter output at the sample time $iT$ is positive. Thus, the probability that $Z_i > 0$ given that a 0 is transmitted is

$$ \epsilon = P(Z_i > 0 \mid 0 \text{ sent}), $$

where $Z_i$ is a Gaussian random variable, $N(-E_s, E_s N_0/2)$, and $E_s$ is the energy per symbol. Since the probability density function of $Z_i$ is

$$ f(z) = \frac{1}{\sqrt{\pi N_0 E_s}}\, e^{-(z + E_s)^2 / (N_0 E_s)}, $$

we have

$$ \epsilon = \int_0^\infty f(z)\, dz = Q\!\left(\sqrt{\frac{2E_s}{N_0}}\right), $$

where

$$ Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty e^{-y^2/2}\, dy $$

is the complementary error function of Gaussian statistics (often called the Q-function).
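Python's standard library has no Q-function, but the standard identity $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ makes it a one-liner; a sketch with illustrative signal-to-noise ratios:

```python
from math import erfc, sqrt

def Q(x: float) -> float:
    """Gaussian tail probability Q(x) = P(N(0,1) > x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(x / sqrt(2))

# Crossover probability of the hard-decision channel at a few Es/N0 values.
for snr_dB in (0, 3, 6, 9):
    snr = 10 ** (snr_dB / 10)
    print(snr_dB, "dB:", Q(sqrt(2 * snr)))
```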
When coding is used, we prefer measuring the energy per information bit, $E_b$, rather than per symbol. For uncoded BPSK, we have $E_b = E_s$. Letting $P_b$ denote the bit error probability (or bit error rate), that is, the probability that an information bit is erroneously delivered to the destination, we have for uncoded BPSK

$$ P_b = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right). $$
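This formula is easy to verify by simulation with the equivalent discrete-time model, in which antipodal symbols $\pm\sqrt{E_s}$ are disturbed by Gaussian noise of variance $N_0/2$; a sketch with illustrative parameters:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)
EbN0 = 10 ** (4.0 / 10)          # Eb/N0 = 4 dB; Eb = Es for uncoded BPSK
Es, N0 = 1.0, 1.0 / EbN0

bits = rng.integers(0, 2, size=2_000_000)
x = np.where(bits == 1, sqrt(Es), -sqrt(Es))        # antipodal channel symbols
z = x + rng.normal(0.0, sqrt(N0 / 2), size=x.size)  # AWGN with variance N0/2
u_hat = (z > 0).astype(bits.dtype)                  # hard decisions, threshold zero

print("simulated:", np.mean(u_hat != bits))
print("Q(sqrt(2 Eb/N0)):", 0.5 * erfc(sqrt(EbN0)))  # both about 1.25e-2
```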
How much better can we do with coding?
It is clear that when we use coding, it is a waste of information to make hard decisions. Since the influence of each information bit will be spread over several channel symbols, the decoder can benefit from using the value of $Z_i$ (hard decisions use only the sign of $Z_i$) as an indication of how reliable the received symbol is. The demodulator can give the analog value of $Z_i$ as its output, but it is often more practical to use, for example, a three-bit quantization: a soft decision. By introducing seven thresholds, the values of $Z_i$ are divided into eight intervals and we obtain an eight-level soft-quantized discrete memoryless channel (DMC), as shown in Fig. 1.6.
Figure 1.6 Binary-input, 8-ary-output DMC.
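A sketch of such an eight-level quantizer; the text does not specify the threshold values, so uniform spacing (one common choice) is assumed here:

```python
import numpy as np

def soft_quantize(z, step=0.5):
    """Map matched filter outputs to the 8 levels 0..7 using seven thresholds."""
    thresholds = step * np.arange(-3, 4)   # seven uniformly spaced thresholds (assumed)
    return np.digitize(z, thresholds)      # interval index in 0..7

z = np.array([-2.5, -0.4, 0.05, 1.7])
print(soft_quantize(z))                    # -> [0 3 4 7]
```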
Shannon [Sha48] showed that the capacity of the bandlimited AWGN channel with bandwidth $W$ is

$$ C_t = W \log_2\!\left(1 + \frac{S}{N_0 W}\right) \ \text{bits per second}, $$

where $N_0/2$ and $S$ denote the two-sided noise spectral density and the signaling power, respectively. If the bandwidth $W$ goes to infinity, we have

$$ C_\infty = \lim_{W \to \infty} C_t = \frac{S}{N_0} \log_2 e. $$
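A short numerical check of both expressions (the parameter values are illustrative):

```python
from math import log2, e

def capacity(S, N0, W):
    """Bandlimited AWGN capacity C_t = W log2(1 + S / (N0 W)) in bits/s."""
    return W * log2(1 + S / (N0 * W))

S, N0 = 1.0, 1e-3          # illustrative power and noise spectral density
for W in (100, 1_000, 10_000, 100_000):
    print(W, capacity(S, N0, W))           # increases with W ...
print("W -> infinity:", (S / N0) * log2(e))  # ... toward about 1443 bits/s here
```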
If we transmit $K$ information bits during $\tau$ seconds, where $\tau$ is a multiple of the bit duration $T$, we have

$$ S\tau = K E_b. $$

Since the data transmission rate is $R_t = K/\tau$ bits per second, the energy per bit can be written

$$ E_b = \frac{S}{R_t}. $$
Combining...
| Publication date | 3.8.2015 |
|---|---|
| Series | IEEE Press Series on Digital & Mobile Communication |
| Language | English |
| Subject area | Technology ► Electrical engineering / Power engineering |
| | Technology ► Communications engineering |
| Keywords | 0-7803-3483-3 • Communication technology • Communication Technology - Networks • convolutional coders • Convolutional coding • Wireless communication • Electrical & Electronics Engineering • Error control coding • IEEE • IEEE books • IEEE series • Communication networks • LDPC codes • LDPC convolutional codes • low-density parity-check codes • Mobile & Wireless Communications |
| ISBN-10 | 1-119-09867-X / 111909867X |
| ISBN-13 | 978-1-119-09867-6 / 9781119098676 |
Copy protection: Adobe DRM
File format: EPUB (Electronic Publication)