Nevio Benvenuto
University of Padova, Italy
Giovanni Cherubini
IBM Zurich Research Laboratory, Switzerland
Copyright © 2002 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester,
West Sussex PO19 8SQ, England
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval
system or transmitted in any form or by any means, electronic, mechanical, photocopying,
recording, scanning or otherwise, except under the terms of the Copyright, Designs and
Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency
Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of
the Publisher. Requests to the Publisher should be addressed to the Permissions Department,
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ,
England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770571.
Neither the author(s) nor John Wiley & Sons, Ltd accept any responsibility or liability for loss
or damage occasioned to any person or property through using the material, instructions, methods or
ideas contained herein, or acting or refraining from acting as a result of such use. The author(s) and
Publisher expressly disclaim all implied warranties, including merchantability or fitness for any
particular purpose.
Designations used by companies to distinguish their products are often claimed as trademarks. In
all instances where John Wiley & Sons is aware of a claim, the product names appear in initial
capital or capital letters. Readers, however, should contact the appropriate companies for more
complete information regarding trademarks and registration.
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1
Wiley also publishes its books in a variety of electronic formats. Some content that appears
in print may not be available in electronic books.
A catalogue record for this book is available from the British Library
ISBN 0-470-84389-6
Produced from LaTeX files supplied by the authors, processed by Laserwords Private Limited, Chennai, India
Printed and bound in Great Britain by Biddles Ltd, Guildford and King’s Lynn
This book is printed on acid-free paper responsibly manufactured from sustainable forestry
in which at least two trees are planted for each one used for paper production.
To Adriana, and to Antonio, Claudia, and Mariuccia
Contents
Preface xxix
Acknowledgements xxxi
Bibliography 103
Appendices 104
1.A Multirate systems 104
1.A.1 Fundamentals 104
1.A.2 Decimation 106
1.A.3 Interpolation 109
1.A.4 Decimator filter 110
1.A.5 Interpolator filter 112
1.A.6 Rate conversion 113
1.A.7 Time interpolation 116
Linear interpolation 116
Quadratic interpolation 118
1.A.8 The noble identities 118
1.A.9 The polyphase representation 119
Efficient implementations 120
1.B Generation of Gaussian noise 127
Bibliography 158
Appendices 159
2.A The estimation problem 159
The estimation problem for random variables 159
MMSE estimation 159
Extension to multiple observations 160
MMSE linear estimation 161
MMSE linear estimation for random vectors 162
3 Adaptive transversal filters 165
3.1 Adaptive transversal filter: MSE criterion 166
3.1.1 Steepest descent or gradient algorithm 166
Stability of the steepest descent algorithm 168
Conditions for convergence 169
Choice of the adaptation gain for fastest convergence 170
Transient behavior of the MSE 171
3.1.2 The least mean-square (LMS) algorithm 173
Implementation 173
Computational complexity 175
Canonical model 175
Conditions for convergence 175
3.1.3 Convergence analysis of the LMS algorithm 177
Convergence of the mean 178
Convergence in the mean-square sense (real scalar case) 179
Convergence in the mean-square sense (general case) 180
Basic results 183
Observations 184
Final remarks 186
3.1.4 Other versions of the LMS algorithm 186
Leaky LMS 187
Sign algorithm 187
Sigmoidal algorithm 188
Normalized LMS 189
Variable adaptation gain 189
LMS for lattice filters 191
3.1.5 Example of application: the predictor 191
3.2 The recursive least squares (RLS) algorithm 197
Normal equation 198
Derivation of the RLS algorithm 199
Initialization of the RLS algorithm 201
Recursive form of Emin 202
Convergence of the RLS algorithm 203
Computational complexity of the RLS algorithm 203
Example of application: the predictor 203
3.3 Fast recursive algorithms 204
3.3.1 Comparison of the various algorithms 205
Index 1277
Preface
The motivation for this book is twofold. On the one hand, we provide a didactic tool
to students of communications systems. On the other hand, we present a discussion of
fundamental algorithms and structures for telecommunication technologies. The contents
reflect our experience in teaching courses on Algorithms for Telecommunications at the
University of Padova, Italy, as well as our professional experience acquired in industrial
research laboratories.
The text explains the procedures for solving problems posed by the design of systems
for reliable communications over wired or wireless channels. In particular, we focus on
fundamental developments in the field in order to provide the reader with the necessary
insight to design essential elements of various communications systems.
The book is divided into nineteen chapters. We briefly indicate four tracks corresponding
to specific areas and course work offered.
Track 1. Track 1 includes the basic elements for a first course on telecommunications,
which we regard as an introduction to the remaining tracks. It covers Chapter 1, which re-
calls fundamental concepts on signals and random processes, with an emphasis on second-
order statistical descriptions. A discussion of the characteristics of transmission media fol-
lows in Chapter 4. In this track we focus on the description of noise in electronic devices
and on the laws of propagation in transmission lines and radio channels. The representation
of waveforms by sequences of binary symbols is treated in Chapter 5; for a first course it
is suggested that emphasis be placed on PCM. Next, Chapter 6 examines the fundamental
principles of a digital transmission system, where a sequence of information symbols is
sent over a transmission channel. We refer to the Shannon theorem to establish the maximum
bit rate that can be transmitted reliably over a noisy channel. Signal dispersion caused by a
transmission channel is then analyzed in Chapter 7. Examples of elementary and practical
implementations of transmission systems are presented, together with a brief introduction to
computer simulations. The first three sections of Chapter 11, where we introduce methods
for increasing transmission reliability by exploiting the redundancy added to the information
bits, conclude the first track.
in Chapter 2, as well as various applications of the Wiener filter, for example channel
identification and interference cancellation. These applications are further developed in the
first two sections of Chapter 16.
In the first part of Chapter 8, channel equalization is examined as a further applica-
tion of the Wiener filter. In the second part of the chapter, more sophisticated methods of
equalization and symbol detection, which rely on the Viterbi algorithm and on the forward-
backward algorithm, are analyzed. Initially single-carrier modulation systems are consid-
ered. In Chapter 9, we introduce multicarrier modulation techniques, which are preferable
for transmission over very dispersive channels and/or applications that require flexibility in
spectral allocation. In Chapter 10 spread spectrum systems are examined, with emphasis on
applications for simultaneous channel access by several users that share a wideband chan-
nel. The inherent narrowband interference rejection capabilities of spread spectrum systems,
as well as their implementations, are also discussed. This is followed by Chapter 18, which
illustrates specific modulation techniques developed for mobile radio applications.
Track 3. We observe the trend towards implementing transceiver functions using digital
signal processors. Therefore the algorithmic aspects of a transmission system are becoming
increasingly important. Hardware devices are assigned wherever possible only the functions
of analog front-end, fixed filtering, and digital-to-analog and analog-to-digital conversion.
This approach enhances the flexibility of transceivers, which can be utilized for more than
one transmission standard, and considerably reduces development time.
In line with the above considerations, Track 3 begins with a review of Chapters 2 and 3,
which illustrate the fundamental principles of transmission system design, and of Chapter 8,
which investigates individual building blocks for channel equalization and symbol detection.
The assumption that the transmission channel characteristics are known a priori is removed
in Chapter 15, where blind equalization techniques are discussed. Channel coding techniques
to improve the reliability of transmission are investigated in depth in Chapters 11 and 12.
A further method to mitigate channel dispersion is precoding. The operations of systems
that employ joint precoding and channel coding are explained in Chapter 13. Because of
electromagnetic coupling, the desired signal at the receiver is often disturbed by other
transmissions taking place simultaneously. Cancellation techniques to suppress interference
signals are treated in Chapter 16.
Track 4. Track 4 addresses various challenges encountered in designing wired and wire-
less communications systems. The elements introduced in Chapters 2 and 3, as well as the
algorithms introduced in Chapter 8, are essential for this track. The principles of multicar-
rier and spread spectrum modulation techniques, which are increasingly being adopted in
communications systems, are investigated in depth in Chapters 9 and 10, respectively. The
design of the receiver front-end, as well as various methods for timing and carrier recovery,
are dealt with in Chapter 14. Applications of interference cancellation and multi-user detec-
tion are addressed in Chapter 16. An overview of wired and wireless access technologies
appears in Chapter 17, and specific examples of system design are given in Chapters 18
and 19.
Acknowledgements
We gratefully acknowledge all who have made the realization of this book possible. In
particular, the editing of the various chapters would never have been completed without the
contributions of numerous students in our courses on Algorithms for Telecommunications.
Although space limitations preclude mentioning them all by name, we nevertheless express
our sincere gratitude.
We also thank Christian Bolis and Chiara Paci for their support in developing the software
for the book, Charlotte Bolliger and Lilli M. Pavka for their assistance in administering the
project, and Urs Bitterli and Darja Kropaci for their help with the graphics editing. For text
processing of the Italian version, the contribution of Barbara Sicoli was indispensable; our
thanks also go to Jane Frankenfield Zanin for her help in translating the text into English.
We are pleased to thank the following colleagues for their invaluable assistance through-
out the revision of the book: Antonio Assalini, Paola Bisaglia, Alberto Bononi, Giancarlo
Calvagno, Giulio Colavolpe, Roberto Corvaja, Elena Costa, Andrea Galtarossa, Antonio
Mian, Carlo Monti, Ezio Obetti, Riccardo Rahely, Roberto Rinaldo, Antonio Salloum,
Fortunato Santucci, Andrea Scaggiante, Giovanna Sostrato, Stefano Tomasin, and Luciano
Tomba. We gratefully acknowledge our colleague and mentor Jack Wolf for letting us in-
clude his lecture notes in the chapter on channel codes. A special acknowledgment goes
to our colleagues Werner Bux and Evangelos Eleftheriou of the IBM Zurich Research
Laboratory, and Silvano Pupolin of the University of Padua, for their continuing support.
Nevio Benvenuto
Giovanni Cherubini
To make the reading of the adopted symbols easier, a table containing the Greek alphabet
is included.
α A alpha      ν N nu
β B beta       ξ Ξ xi
γ Γ gamma      ο O omicron
δ Δ delta      π Π pi
ε E epsilon    ρ P rho
ζ Z zeta       σ Σ sigma
η H eta        τ T tau
θ Θ theta      υ Υ upsilon
ι I iota       φ Φ phi
κ K kappa      χ X chi
λ Λ lambda     ψ Ψ psi
μ M mu         ω Ω omega
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 1
Elements of signal theory
In the present chapter we recall fundamental concepts of signal theory and random processes.
A majority of readers will simply find this chapter a review of known principles, while others
will find it a useful incentive for further in-depth study, for which we recommend the items
in the bibliography. In any event, we will begin with the definition of signal space and
its discrete representation, then move to the study of discrete-time linear systems (discrete
Fourier transforms, IIR and FIR impulse responses) and signals (complex representation of
passband signals and the baseband equivalent). We will conclude with the study of random
processes, with emphasis on the statistical estimation of first- and second-order ergodic
processes (periodogram, correlogram, ARMA, MA and especially AR models).
4. For each x, there is a unique vector −x, called the additive inverse, such that

x + (−x) = 0    (1.4)

6. Distributive laws

α(x + y) = αx + αy    (1.7)
(α + β)x = αx + βx    (1.8)
Figure 1.1. Geometrical interpretation in the two-dimensional space of the sum of two vectors
and the multiplication of a vector by a scalar.
1 Later a discrete-time signal will be indicated simply as {x(k)}, omitting the indication of the sampling period. In general, we will indicate by {x_k} a sequence of real or complex numbers not necessarily generated at instants kT_c.
Inner product

In an I-dimensional Euclidean space,2 given the two vectors x = [x_1, …, x_I]^T and y = [y_1, …, y_I]^T, we indicate with ⟨x, y⟩ the inner product:

⟨x, y⟩ = Σ_{i=1}^{I} x_i y_i*    (1.11)

If ⟨x, y⟩ is real, there is an important geometrical interpretation of the inner product in the Euclidean space, represented in Figure 1.2, that is obtained from the relation:

⟨x, y⟩ = ||x|| ||y|| cos θ    (1.12)

where ||x|| denotes the norm or length of the vector x, and θ is the angle between the two vectors. Note that

⟨x, x⟩ = Σ_{i=1}^{I} |x_i|² = ||x||²    (1.13)
Observation 1.1
From (1.12),

⟨x, y⟩ / ||y|| = ||x|| cos θ    (1.14)

is the length of the projection of x onto y.

Definition 1.2
Two vectors x and y are orthogonal (x ⊥ y) if ⟨x, y⟩ = 0, that is, if the angle they form is 90°.

Figure 1.2. Geometrical representation of the inner product between two vectors (I = 2); ||x|| is the norm of x, that is, the vector length.
2 Henceforth: T stands for transpose, * for complex conjugate, and H for transpose complex conjugate, or Hermitian.
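As a minimal numerical sketch of (1.11)-(1.14), with arbitrarily chosen vectors (and Python/NumPy used purely for illustration), the inner product, norms, and projection length can be computed as follows:

```python
import numpy as np

# Numerical sketch of (1.11)-(1.14) with illustrative vectors.
x = np.array([3.0, 1.0])
y = np.array([2.0, 0.0])

# <x, y> = sum_i x_i y_i* -- np.vdot conjugates its *first* argument,
# so np.vdot(y, x) = sum_i x_i conj(y_i), matching (1.11).
inner = np.vdot(y, x)
norm_x = np.sqrt(np.vdot(x, x).real)   # ||x||, cf. (1.13)
norm_y = np.sqrt(np.vdot(y, y).real)

proj_len = inner.real / norm_y         # <x, y>/||y|| = ||x|| cos(theta), cf. (1.14)
cos_theta = inner.real / (norm_x * norm_y)
```

Here the projection length of x onto y comes out as ||x|| cos θ, in agreement with the geometrical interpretation of Figure 1.2.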
We can extend these concepts to a signal space, defining the inner product as

⟨x, y⟩ = Σ_{k=−∞}^{+∞} x(k) y*(k)    (1.15)

for discrete-time signals, and as

⟨x, y⟩ = ∫_{−∞}^{+∞} x(t) y*(t) dt    (1.16)

for continuous-time signals. In both cases it is assumed that the energy of the signals is finite. Hence, for continuous-time signals it must be:

∫_{−∞}^{+∞} |x(t)|² dt < ∞  and  ∫_{−∞}^{+∞} |y(t)|² dt < ∞    (1.17)
In this text, δ_n is the Kronecker delta, whereas δ(t) denotes the Dirac delta.
we want to express x(t), t ∈ ℝ, as a linear combination of the functions {φ_i(t)}, i ∈ I. Consider the signal

x̂(t) = Σ_{i∈I} x_i φ_i(t)    (1.20)
Let

c_i = ⟨x, φ_i⟩,  i ∈ I    (1.23)

and express E_e as

E_e = ⟨x − x̂, x − x̂⟩
    = ⟨x, x⟩ − ⟨x̂, x⟩ − ⟨x, x̂⟩ + ⟨x̂, x̂⟩    (1.24)
    = E_x + Σ_{i∈I} (x_i x_i* − x_i c_i* − x_i* c_i)
    = (E_x − Σ_{i∈I} |c_i|²) + Σ_{i∈I} |x_i − c_i|²    (1.25)

as |x_i|² − x_i c_i* − x_i* c_i + |c_i|² = |x_i − c_i|² = (x_i − c_i)(x_i − c_i)*. The minimum of (1.25) is obtained if the second term is zero. Hence the coefficients to be determined are given by

x_i = c_i = ⟨x, φ_i⟩ = ∫_{−∞}^{+∞} x(t) φ_i*(t) dt,  i ∈ I    (1.26)

and

E_e = E_x − E_x̂    (1.27)

where

E_x̂ = Σ_{i∈I} |x_i|²    (1.28)
If E_e = 0, then {φ_i(t)}, i ∈ I, is a complete basis for x(t), t ∈ ℝ, and x(t) can be expressed as

x(t) = Σ_{i∈I} x_i φ_i(t)    (1.29)
where equality must be intended in terms of quadratic norm. Moreover, from (1.29),

E_x = Σ_{i∈I} |x_i|²    (1.30)

that is, the energy of the signal coincides with the sum of the squared coefficient amplitudes.
Moreover, for the error e = x − x̂,

⟨e, φ_i⟩ = ⟨x − x̂, φ_i⟩ = x_i − x_i = 0,  ∀i ∈ I    (1.31)

that is, e ⊥ φ_i, ∀i ∈ I.
As an example, for a generic signal x and for a basis formed by two signals φ_1 and φ_2, one gets the geometrical representation in Figure 1.3.
Signal representation

Definition 1.3
The signal x̂(t), t ∈ ℝ, given by (1.20) with {x_i = c_i}, i ∈ I, is called the projection of x(t), t ∈ ℝ, onto the space spanned by the signals {φ_i(t)}, i ∈ I, that is, the space whose signals are expressed as linear combinations of {φ_i(t)}, i ∈ I.

If E_e = 0, then x(t), t ∈ ℝ, belongs to the space spanned by {φ_i(t)}, i ∈ I. Therefore, given a sequence of orthonormal signals, which form a complete basis for x(t), the signal x(t) can be represented by a sequence of numbers {x_i}, i ∈ I, given by

x_i = ⟨x, φ_i⟩    (1.32)
Figure 1.3. Geometrical representation of the projection of x onto the space spanned by φ_1 and φ_2.
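As a discrete-time sketch of the projection relations (1.20)-(1.32), with an illustrative orthonormal basis of ℝ⁴ (not taken from the text), the energy relation (1.27) and the orthogonality of the error (1.31) can be verified numerically:

```python
import numpy as np

# Projection of x onto the subspace spanned by an orthonormal basis
# {phi1, phi2} of R^4; vectors are illustrative.
phi1 = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
phi2 = np.array([0.0, 0.0, 1.0, 1.0]) / np.sqrt(2)
x = np.array([2.0, 0.0, 1.0, 3.0])

c1 = np.vdot(phi1, x)              # x_i = <x, phi_i>, cf. (1.32)
c2 = np.vdot(phi2, x)
x_hat = c1 * phi1 + c2 * phi2      # projection, cf. (1.20)
e = x - x_hat                      # estimation error

E_x = np.vdot(x, x).real
E_xhat = np.vdot(x_hat, x_hat).real
E_e = np.vdot(e, e).real           # satisfies E_e = E_x - E_xhat, cf. (1.27)
```

The error e is orthogonal to each basis function, as stated in (1.31).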
In short, we have the following correspondence between a signal and its vector representation:

x(t), t ∈ ℝ  ⟷  x = […, x_{−1}, x_0, x_1, …]^T    (1.33)

It is useful to analyze the inner product between signals in terms of the corresponding vector representations. Let

x(t) = Σ_{i∈I} x_i φ_i(t)  and  y(t) = Σ_{i∈I} y_i φ_i(t)    (1.34)

then

⟨x, y⟩ = ∫_{−∞}^{+∞} x(t) y*(t) dt = ⟨x, y⟩    (1.35)

where the left-hand side refers to the signals and the right-hand side to the corresponding vectors. In particular,

E_x = ∫_{−∞}^{+∞} |x(t)|² dt = Σ_{i∈I} |x_i|² = ⟨x, x⟩ = ||x||² = E_x    (1.36)
The Euclidean distance between the two signals is then

d(x, y) = ( Σ_{i∈I} |x_i − y_i|² )^{1/2} = d(x, y)    (1.38)

In other words, the Euclidean distance between two signals coincides with the Euclidean distance between the corresponding vectors.
Moreover, the following relation holds:4

d²(x, y) = E_x + E_y − 2 Re[⟨x, y⟩]    (1.39)

4 The symbols Re[c] and Im[c] denote, respectively, the real and the imaginary part of c.
Example 1.2.1
Resorting to the sampling theorem, it can be shown that for a real-valued signal x(t), t ∈ ℝ, with finite bandwidth B (see (1.140)), the sequence of functions

φ_i(t) = sin(π(t − iT_c)/T_c) / (π(t − iT_c)/T_c),  T_c = 1/(2B)    (1.42)

forms a complete orthogonal basis for x. The coefficients are given by the samples of x(t) at the time instants t = iT_c,

x_i = x(iT_c)    (1.43)
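A numerical sketch of (1.42)-(1.43) follows; the test signal, bandwidth, and truncation length are illustrative choices, and the infinite basis is necessarily truncated, so the reconstruction is only approximate away from the sample instants:

```python
import numpy as np

# Reconstructing a bandlimited signal from its samples x(i Tc) via the
# (truncated) sinc basis of (1.42).
B = 4.0                      # bandwidth (Hz); signal components must stay below B
Tc = 1.0 / (2.0 * B)         # Tc = 1/(2B), cf. (1.42)

def x(t):
    return np.cos(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

i = np.arange(-200, 201)     # truncation of the infinite basis
samples = x(i * Tc)          # coefficients x_i = x(i Tc), cf. (1.43)

def reconstruct(t):
    # x(t) ~ sum_i x_i sinc((t - i Tc)/Tc); np.sinc(u) = sin(pi u)/(pi u)
    return np.sum(samples * np.sinc((t - i * Tc) / Tc))
```

At the sample instants the reconstruction is exact, since sinc(k − i) vanishes for all integers k ≠ i.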
2. Let

φ'_2(t) = s_2(t) − ⟨s_2, φ_1⟩ φ_1(t)    (1.48)

it is easy to see that φ'_2 ⊥ φ_1; in fact,

⟨φ'_2, φ_1⟩ = ⟨s_2 − ⟨s_2, φ_1⟩φ_1, φ_1⟩ = ⟨s_2, φ_1⟩ − ⟨s_2, φ_1⟩ = 0    (1.49)

As illustrated in Figure 1.4, in (1.48) ⟨s_2, φ_1⟩φ_1(t) is the projection of s_2 upon φ_1. Then, from (1.48),

φ_2(t) = φ'_2(t) / √(E_{φ'_2})    (1.50)
3. Let

φ'_3(t) = s_3(t) − ⟨s_3, φ_1⟩φ_1(t) − ⟨s_3, φ_2⟩φ_2(t)    (1.51)

In general,

φ'_i(t) = s_i(t) − Σ_{j=1}^{i−1} ⟨s_i, φ_j⟩ φ_j(t)    (1.52)

Observation 1.2
The set of {φ_i(t)} is not unique; in any case, the reciprocal distances between signals remain unchanged.

Observation 1.3
The number of dimensions I of {φ_i(t)} can be lower than M if the signals {s_m(t)}, m = 1, …, M, are linearly dependent, that is, if there exists a set of coefficients, not all equal to zero, such that

Σ_{m=1}^{M} c_m s_m(t) = 0  ∀t    (1.54)

In such a case, it happens that in (1.52) for some i the signal φ'_i(t) is identically zero. Obviously, a null signal cannot be an element of the basis.
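The orthonormalization steps (1.48)-(1.52) can be sketched in code; here signals are sampled vectors with the discrete-time inner product (1.15), and the three test signals are illustrative, with the third deliberately dependent on the first two:

```python
import numpy as np

# Gram-Schmidt orthonormalization of a set of sampled signals.
def gram_schmidt(signals, tol=1e-10):
    basis = []
    for s in signals:
        phi_p = s.astype(complex)
        for phi in basis:
            phi_p = phi_p - np.vdot(phi, s) * phi  # subtract projections, cf. (1.52)
        energy = np.vdot(phi_p, phi_p).real
        if energy > tol:            # a null phi'_i is discarded, cf. Observation 1.3
            basis.append(phi_p / np.sqrt(energy))  # normalize, cf. (1.50)
    return basis

s1 = np.array([1.0, 1.0, 0.0, 0.0])
s2 = np.array([1.0, 1.0, 1.0, 1.0])
s3 = s1 + s2                        # linearly dependent on s1 and s2
basis = gram_schmidt([s1, s2, s3])  # yields only I = 2 basis functions
```

Since s_3 is a linear combination of s_1 and s_2, the procedure returns I = 2 < M = 3 orthonormal functions.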
Let us look at a few examples of discrete representation of a set of signals.
Example 1.2.2
For the three signals

s_1(t) = A sin(2πt/T),  0 < t < T/2;  0 elsewhere    (1.55)
s_2(t) = A sin(2πt/T),  0 < t < T;  0 elsewhere    (1.56)
s_3(t) = A sin(2πt/T),  T/2 < t < T;  0 elsewhere    (1.57)

which are depicted in Figure 1.5, an orthonormal basis, represented in Figure 1.6, is the following:

φ_1(t) = (2/√T) sin(2πt/T),  0 < t < T/2;  0 elsewhere    (1.58)
φ_2(t) = (2/√T) sin(2πt/T),  T/2 < t < T;  0 elsewhere    (1.59)
Figure 1.5. The signals s_1(t), s_2(t), and s_3(t) of Example 1.2.2.

Figure 1.6. The basis functions φ_1(t) and φ_2(t), with peak amplitudes ±2/√T.
Moreover,

s_1(t) = (A/2)√T φ_1(t)    (1.60)
s_2(t) = (A/2)√T φ_1(t) + (A/2)√T φ_2(t)    (1.61)
s_3(t) = (A/2)√T φ_2(t)    (1.62)

from which the correspondence between signals and their vector representation is (see Figure 1.7):

s_1(t) ⟷ s_1 = [(A/2)√T, 0]^T    (1.63)
s_2(t) ⟷ s_2 = [(A/2)√T, (A/2)√T]^T    (1.64)
s_3(t) ⟷ s_3 = [0, (A/2)√T]^T    (1.65)
Figure 1.7. Vector representation of the signals s_1, s_2, and s_3.

We note that the three signals are represented as a linear combination of only two functions (I = 2).
Definition 1.4
The vector representation of a set of M signals is often called a signal constellation.
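The correspondence (1.63)-(1.65) can be checked numerically; the sketch below uses the illustrative choice A = T = 1 and approximates the inner products by Riemann sums on a fine grid:

```python
import numpy as np

# Projecting s1, s2, s3 of Example 1.2.2 onto phi_1, phi_2 recovers the
# constellation (1.63)-(1.65); here A = T = 1.
A, T = 1.0, 1.0
n = 100000
t = (np.arange(n) + 0.5) * (T / n)       # midpoint grid on (0, T)
dt = T / n

def seg(lo, hi):
    return np.where((t > lo) & (t < hi), A * np.sin(2 * np.pi * t / T), 0.0)

s1, s2, s3 = seg(0, T / 2), seg(0, T), seg(T / 2, T)
phi1 = np.where(t < T / 2, (2 / np.sqrt(T)) * np.sin(2 * np.pi * t / T), 0.0)
phi2 = np.where(t > T / 2, (2 / np.sqrt(T)) * np.sin(2 * np.pi * t / T), 0.0)

def coords(sig):
    # [<s, phi_1>, <s, phi_2>] via a Riemann sum
    return np.array([np.sum(sig * phi1) * dt, np.sum(sig * phi2) * dt])
```

With A = T = 1, the three coordinate pairs come out close to (1/2, 0), (1/2, 1/2), and (0, 1/2), i.e., A√T/2 in the nonzero components, as in Figure 1.7.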
and

φ_2(t) = √(2/T) sin(2πf_0 t),  0 < t < T;  0 elsewhere    (1.68)
Figure 1.10. Analog filter as a time-invariant linear system with continuous domain.
Definition 1.5
We introduce two functions that will be extensively used:

rect(f/F) = 1 for |f| < F/2;  0 elsewhere    (1.77)
Table 1.1 Properties of the Fourier transform.

x(t) ⟷ X(f)
linearity: a x(t) + b y(t) ⟷ a X(f) + b Y(f)
duality: X(t) ⟷ x(−f)
time inverse: x(−t) ⟷ X(−f)
complex conjugate: x*(t) ⟷ X*(−f)
real part: Re[x(t)] = (x(t) + x*(t))/2 ⟷ (1/2)[X(f) + X*(−f)]
imaginary part: Im[x(t)] = (x(t) − x*(t))/(2j) ⟷ (1/(2j))[X(f) − X*(−f)]
scaling: x(at), a ≠ 0 ⟷ (1/|a|) X(f/a)
time shift: x(t − t_0) ⟷ e^{−j2πf t_0} X(f)
frequency shift: x(t) e^{j2πf_0 t} ⟷ X(f − f_0)
modulation: x(t) cos(2πf_0 t + φ) ⟷ (1/2)[e^{jφ} X(f − f_0) + e^{−jφ} X(f + f_0)]
  x(t) sin(2πf_0 t + φ) ⟷ (1/(2j))[e^{jφ} X(f − f_0) − e^{−jφ} X(f + f_0)]
  Re[x(t) e^{j(2πf_0 t + φ)}] ⟷ (1/2)[e^{jφ} X(f − f_0) + e^{−jφ} X*(−f − f_0)]
differentiation: (d/dt) x(t) ⟷ j2πf X(f)
integration: ∫_{−∞}^{t} x(τ) dτ = 1 ∗ x(t) ⟷ X(f)/(j2πf) + (X(0)/2) δ(f)
convolution: [x(τ) ∗ y(τ)](t) ⟷ X(f) Y(f)
correlation: [x(τ) ∗ y*(−τ)](t) ⟷ X(f) Y*(f)
product: x(t) y(t) ⟷ [X(·) ∗ Y(·)](f)
real signal: x(t) = x*(t) ⟷ X(f) = X*(−f), X Hermitian: Re[X(f)] even, Im[X(f)] odd, |X(f)|² even
imaginary signal: x(t) = −x*(t) ⟷ X(f) = −X*(−f)
real and even signal: x(t) = x*(t) = x(−t) ⟷ X(f) = X*(f) = X(−f), X real and even
real and odd signal: x(t) = x*(t) = −x(−t) ⟷ X(f) = −X*(f) = −X(−f), X imaginary and odd
Parseval's theorem: E_x = ∫_{−∞}^{+∞} |x(t)|² dt = ∫_{−∞}^{+∞} |X(f)|² df = E_X
Poisson sum formula: Σ_{k=−∞}^{+∞} x(kT_c) = (1/T_c) Σ_{ℓ=−∞}^{+∞} X(ℓ/T_c)
sinc(t) = sin(πt)/(πt)    (1.78)

The following relation holds:

F[sinc(Ft)] = (1/F) rect(f/F)    (1.79)
Further examples of signals and relative Fourier transforms are given in Table 1.2. We reserve the notation H(s) to indicate the Laplace transform of h(t), t ∈ ℝ:

H(s) = ∫_{−∞}^{+∞} h(t) e^{−st} dt    (1.80)

with s a complex variable; H(s) is also called the transfer function of the filter. A class of functions H(s) often used in practice is characterized by the ratio of two polynomials in s, each with a finite number of coefficients.
It is easy to observe that if the curve s = j2πf in the s-plane belongs to the convergence region of the integral in (1.80), then H(f) is related to H(s) by

H(f) = H(s)|_{s = j2πf}    (1.81)

(Figure: the pulse sinc(tF) and its Fourier transform (1/F) rect(f/F).)
Table 1.2 Some Fourier transform pairs.

x(t) ⟷ X(f)
δ(t) ⟷ 1
1 ⟷ δ(f)
e^{j2πf_0 t} ⟷ δ(f − f_0)
cos(2πf_0 t) ⟷ (1/2)[δ(f − f_0) + δ(f + f_0)]
sin(2πf_0 t) ⟷ (1/(2j))[δ(f − f_0) − δ(f + f_0)]
1(t) ⟷ (1/2)δ(f) + 1/(j2πf)
sgn(t) ⟷ 1/(jπf)
rect(t/T) ⟷ T sinc(fT)
sinc(t/T) ⟷ T rect(fT)
(1 − |t|/T) rect(t/(2T)) ⟷ T sinc²(fT)
e^{−at} 1(t), a > 0 ⟷ 1/(a + j2πf)
t e^{−at} 1(t), a > 0 ⟷ 1/(a + j2πf)²
e^{−a|t|}, a > 0 ⟷ 2a/(a² + (2πf)²)
e^{−at²}, a > 0 ⟷ √(π/a) exp(−π²f²/a)
(Figure: discrete-time filter h with input sequence {x(k)} and output sequence {y(k)}, sampling period T_c.)

The relation between the input sequence {x(k)} and the output sequence {y(k)} is given by the convolution operation:

y(k) = [x(m) ∗ h(m)](k) = Σ_{n=−∞}^{+∞} h(k − n) x(n)    (1.82)

We note the property that, for x(k) = b^k, where b is a complex constant, the output is given by y(k) = H(b) b^k. In Table 1.3 some further properties of the z-transform are summarized. For discrete-time linear systems, in the frequency domain (1.82) becomes

Y(f) = X(f) H(f)    (1.86)

where all functions are periodic of period 1/T_c.
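The eigenfunction property stated above, y(k) = H(b) b^k for x(k) = b^k, can be checked directly; the filter taps and the constant b below are illustrative:

```python
import numpy as np

# For x(k) = b^k the output of a filter h is y(k) = H(b) b^k,
# with H(z) = sum_n h(n) z^{-n}.
h = np.array([1.0, -0.5, 0.25])   # illustrative FIR taps
b = 0.9
n = np.arange(len(h))

H_b = np.sum(h * b ** (-n))       # H(b) = sum_n h(n) b^{-n}

k = 10
y_k = np.sum(h * b ** (k - n))    # y(k) = sum_n h(n) x(k - n), cf. (1.82)
# y_k coincides with H_b * b**k
```

The identity is exact: Σ_n h(n) b^{k−n} = b^k Σ_n h(n) b^{−n}.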
Example 1.4.1
A fundamental example of z-transform is that of the sequence:

h(k) = a^k for k ≥ 0;  0 for k < 0,  with |a| < 1    (1.87)

Applying the transform formula (1.83) we find

H(z) = 1/(1 − a z^{−1}) = z/(z − a)    (1.88)

under the condition |a/z| < 1.
Example 1.4.2
Let q(t), t ∈ ℝ, be a continuous-time signal with Fourier transform Q(f), f ∈ ℝ. We now consider the sequence obtained by sampling q(t), that is

h_k = q(kT_c),  k integer    (1.89)

Using the Poisson formula of Table 1.1, one demonstrates that the Fourier transform of the sequence {h_k} is related to Q(f) by

H(f) = F[h_k] = H(e^{j2πfT_c}) = (1/T_c) Σ_{ℓ=−∞}^{+∞} Q(f − ℓ/T_c)    (1.90)
For a sequence {g_k}, k = 0, 1, …, N − 1, the discrete Fourier transform (DFT) is defined as

G_m = Σ_{k=0}^{N−1} g_k W_N^{km},  m = 0, 1, …, N − 1,  W_N = e^{−j2π/N}    (1.92)

with inverse

g_k = (1/N) Σ_{m=0}^{N−1} G_m W_N^{−km},  k = 0, 1, …, N − 1    (1.93)

We note that, besides the factor 1/N, the expression of the inverse DFT (IDFT) coincides with that of the DFT, provided W_N^{−1} is substituted with W_N.
We also observe that direct computation of (1.92) requires N(N − 1) complex additions and N² complex multiplications; however, the algorithm known as fast Fourier transform (FFT) allows computation of the DFT by N log₂ N complex additions and (N/2) log₂ N complex multiplications.
In matrix notation, the DFT is represented by the N × N matrix F with elements [F]_{i,n} = W_N^{in}, i, n = 0, 1, …, N − 1. The inverse operator (IDFT) is given by

F^{−1} = (1/N) F*    (1.95)

We note that F = F^T, and (1/√N) F is a unitary matrix.
The following property holds: if C is a right circulant square matrix whose rows are obtained by successive shifts to the right of the first row, then F C F^{−1} is a diagonal matrix whose elements are given by the DFT of the first row of C.
Introducing the vector formed by the samples of the sequence {g_k}, k = 0, 1, …, N − 1,

g^T = [g_0, g_1, …, g_{N−1}]    (1.96)

and the vector of transform coefficients

G^T = [G_0, G_1, …, G_{N−1}],  G = DFT[g]    (1.97)

observing (1.92), it is immediate to verify the following relation:

G = F g    (1.98)
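A numerical sketch of these matrix relations follows (N and the sequences are illustrative; the first row of the circulant matrix is chosen even so that the diagonal of F C F^{−1} appears in the natural DFT order):

```python
import numpy as np

# The DFT operator F with [F]_{i,n} = W_N^{in}, its inverse (1/N) F*,
# the relation G = F g, and circulant-matrix diagonalization.
N = 8
W = np.exp(-2j * np.pi / N)              # W_N
idx = np.arange(N)
F = W ** np.outer(idx, idx)              # [F]_{i,n} = W_N^{in}
F_inv = np.conj(F) / N                   # F^{-1} = (1/N) F*, cf. (1.95)

g = idx.astype(float)
G = F @ g                                # G = F g, cf. (1.98)

# Right circulant matrix: rows are successive right shifts of the first row c.
c = np.array([4.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
C = np.array([np.roll(c, k) for k in range(N)])
D = F @ C @ F_inv                        # diagonal, with DFT[c] on the diagonal
```

G agrees with np.fft.fft(g), and the off-diagonal entries of D vanish to machine precision.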
and

h(k) = 0 for k < 0 and k > N − 1    (1.101)

We define the periodic signals of period L,

x_repL(k) = Σ_{ℓ=−∞}^{+∞} x(k − ℓL)    (1.102)

and

h_repL(k) = Σ_{ℓ=−∞}^{+∞} h(k − ℓL)    (1.103)

where, in order to avoid time aliasing, it must be

L ≥ L_x and L ≥ N    (1.104)

Definition 1.6
The circular convolution between x and h is a periodic sequence of period L defined as

y^(circ)(k) = (h ⊛ x)(k) = Σ_{i=0}^{L−1} h_repL(i) x_repL(k − i)    (1.105)

with main period corresponding to k = 0, 1, …, L − 1.
(Figure: the finite-support sequences x(k), k = 0, …, L_x − 1, and h(k), k = 0, …, N − 1.)

Y_m^(circ) = X_m H_m,  m = 0, 1, …, L − 1    (1.106)

where H is the column vector given by the L-point DFT of the sequence h.
We are often interested in the linear convolution between x and h given by (1.82):

y(k) = (x ∗ h)(k) = Σ_{i=0}^{N−1} h(i) x(k − i)    (1.108)

If

L ≥ L_x + N − 1    (1.109)

then

y^(circ)(k) = y(k),  k = 0, 1, …, L − 1    (1.110)

To compute the convolution between the two finite-length sequences x and h, (1.109) and (1.110) require that both sequences be completed with zeros (zero padding) to get a length of L = L_x + N − 1 samples. Then, taking the L-point DFT of the two sequences, performing the product (1.106), and taking the inverse transform of the result, one obtains the desired linear convolution.
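The zero-padding procedure just described can be sketched as follows (sequences are illustrative):

```python
import numpy as np

# With zero padding to L >= Lx + N - 1, the circular convolution computed
# through DFTs coincides with the linear convolution, cf. (1.106)-(1.110).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # Lx = 5
h = np.array([1.0, -1.0, 2.0])            # N = 3
L = len(x) + len(h) - 1                   # L = Lx + N - 1, cf. (1.109)

X = np.fft.fft(x, L)                      # zero padding to length L
H = np.fft.fft(h, L)
y_circ = np.fft.ifft(X * H).real          # Ym = Xm Hm, cf. (1.106)

y_lin = np.convolve(x, h)                 # linear convolution (1.108)
# y_circ equals y_lin sample by sample, cf. (1.110)
```

This is the standard FFT-based implementation of linear convolution.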
We give below two relations between the circular convolution y^(circ) and the linear convolution y; in both cases we use

L = L_x    (1.111)

with L > N.

Relation 1. We verify that the two convolutions y^(circ) and y coincide only for the instants k = N − 1, N, …, L − 1, and we write

Indeed, with reference to Figure 1.14, the result of the circular convolution coincides with {y(k)}, output of the linear convolution, only for a delay k such that the product between non-zero samples of the two periodic sequences h_repL and x_repL, indicated by • and ◦, respectively, is avoided. This is achieved only for k ≥ N − 1 and k ≤ L − 1.

9 The notation diag{v} denotes a diagonal matrix whose elements on the diagonal are equal to the components of the vector v.
Figure 1.14. Illustration of the circular convolution operation between {x(k)}, k = 0, 1, …, L − 1, and {h(k)}, k = 0, 1, …, N − 1.

Let y^(px) be the linear convolution between x^(px) and h, with support {−N_px, …, L_x + N − 2}. If N_px ≥ N − 1, it is easy to prove the following relation:

y^(px)(k) = y^(circ)(k),  k = 0, 1, …, L_x − 1    (1.114)

Let us define

z(k) = y^(px)(k) for k = 0, 1, …, L_x − 1;  0 elsewhere    (1.115)

then from (1.114) and (1.106) the following relation between the corresponding L_x-point DFTs is obtained:

Z_m = X_m H_m,  m = 0, 1, …, L_x − 1    (1.116)
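Assuming x^(px) denotes x preceded by a cyclic prefix of N_px samples (the usual construction; this extract does not define it explicitly), relation (1.114) can be checked numerically with illustrative sequences:

```python
import numpy as np

# Prepending a cyclic prefix of Npx >= N - 1 samples makes the linear
# convolution with h coincide, on the block k = 0,...,Lx-1, with the
# circular convolution of period Lx, cf. (1.114).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # Lx = 8
h = np.array([1.0, 0.5, 0.25])                           # N = 3
Lx, N = len(x), len(h)
Npx = N - 1                                              # Npx >= N - 1

x_px = np.concatenate([x[-Npx:], x])        # prefix = last Npx samples of x
y_px = np.convolve(x_px, h)[Npx:Npx + Lx]   # linear convolution, block 0..Lx-1
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, Lx)).real
# y_px equals y_circ on the whole block
```

This identity is what allows multicarrier receivers to replace channel convolution by a per-subcarrier product, as in (1.116).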
1. Loading

h'^T = [h(0), h(1), …, h(N − 1), 0, …, 0]  (L − N zeros)    (1.117)
x'^T = [x(0), x(1), …, x(N − 1), x(N), …, x(L − 1)]    (1.118)

2. Transform

H' = DFT[h']    (1.119)
X' = DFT[x']    (1.120)

3. Matrix product

Y' = diag{X'} H'    (1.121)

4. Inverse transform

y'^T = (DFT^{−1}[Y'])^T = [⋆, …, ⋆, y(N − 1), y(N), …, y(L − 1)]    (1.122)

where the first N − 1 terms, denoted by ⋆, do not correspond to samples of the linear convolution.
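The four steps above can be sketched as follows; block length and sequences are illustrative, and only the outputs y(N − 1), …, y(L − 1) are valid linear-convolution samples:

```python
import numpy as np

# Block filtering by DFT, cf. (1.117)-(1.122).
L, N = 16, 4
rng = np.random.default_rng(0)
x = rng.standard_normal(L)
h = rng.standard_normal(N)

h_p = np.concatenate([h, np.zeros(L - N)])  # 1. loading, cf. (1.117)
H_p = np.fft.fft(h_p)                       # 2. transform, cf. (1.119)
X_p = np.fft.fft(x)                         #    cf. (1.120)
Y_p = X_p * H_p                             # 3. product diag{X'} H', cf. (1.121)
y_p = np.fft.ifft(Y_p).real                 # 4. inverse transform, cf. (1.122)

y_lin = np.convolve(x, h)
# y_p[k] == y_lin[k] only for k = N-1, ..., L-1; the first N-1 samples are
# corrupted by the circular wrap-around.
```

Discarding the first N − 1 output samples of each block is the basis of the overlap-save method of block convolution.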
where we can set $a_0 = 1$ without loss of generality. If the system is causal, (1.127) becomes
$$ y(k) = -\sum_{n=1}^{p} a_n\, y(k-n) + \sum_{n=0}^{q} b_n\, x(k-n) \qquad k \ge 0 \tag{1.128} $$
and the transfer function for such systems assumes the form
$$ H(z) = \frac{Y(z)}{X(z)} = \frac{\displaystyle \sum_{n=0}^{q} b_n z^{-n}}{\displaystyle 1 + \sum_{n=1}^{p} a_n z^{-n}} = \frac{\displaystyle b_0 \prod_{n=1}^{q} \left(1 - z_n z^{-1}\right)}{\displaystyle \prod_{n=1}^{p} \left(1 - p_n z^{-1}\right)} \tag{1.129} $$
where $\{z_n\}$ and $\{p_n\}$ are, respectively, the zeros and the poles of $H(z)$. Equation (1.129) generally defines an infinite impulse response (IIR) filter. In the case in which $a_n = 0$, $n = 1, 2, \dots, p$, (1.129) reduces to
$$ H(z) = \sum_{n=0}^{q} b_n z^{-n} \tag{1.130} $$
and we obtain a finite impulse response (FIR) filter with $h(n) = b_n$, $n = 0, 1, \dots, q$. To obtain the impulse response coefficients, assuming the z-transform $H(z)$ is known, we can expand $H(z)$ in partial fractions and apply the linearity of the z-transform (see Table 1.3, page 19). If $q < p$ and assuming that all poles are distinct, we obtain
$$ H(z) = \sum_{n=1}^{p} \frac{r_n}{1 - p_n z^{-1}} \;\Longrightarrow\; h(k) = \begin{cases} \displaystyle \sum_{n=1}^{p} r_n\, p_n^k & k \ge 0 \\[1ex] 0 & k < 0 \end{cases} \tag{1.131} $$
where the coefficients $r_n$ are the residues of the partial fraction expansion of $H(z)$.
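Expansion (1.131) can be cross-checked numerically; a minimal sketch (SciPy's `residuez` assumed, with a hypothetical two-pole all-pole filter having distinct real poles):

```python
import numpy as np
from scipy.signal import residuez, lfilter

# Hypothetical IIR filter b(z)/a(z) with q < p and distinct poles
b = [1.0]
a = [1.0, -1.1, 0.28]              # poles at 0.7 and 0.4

# Residues r_n and poles p_n of the partial-fraction expansion
r, p, k = residuez(b, a)

# h(k) = sum_n r_n p_n^k for k >= 0, per (1.131)
kk = np.arange(20)
h_pf = sum(rn * pn**kk for rn, pn in zip(r, p)).real

# Impulse response obtained by direct recursive filtering of a delta
delta = np.zeros(20)
delta[0] = 1.0
h_dir = lfilter(b, a, delta)

assert np.allclose(h_pf, h_dir)
```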
Definition 1.7
A causal system is stable (bounded-input bounded-output stability) if $|p_n| < 1$, $\forall n$.
26 Chapter 1. Elements of signal theory
Definition 1.8
The system is minimum phase (maximum phase) if $|p_n| < 1$ and $|z_n| \le 1$ ($|p_n| > 1$ and $|z_n| > 1$), $\forall n$.
Among all systems having the same magnitude response $|H(e^{j2\pi f T_c})|$, the minimum (maximum) phase system presents a phase response, $\arg H(e^{j2\pi f T_c})$, which is below (above) the phase response of all other systems.
Example 1.4.3
It is interesting to determine the phase of a system for a given impulse response. Let us consider the system with transfer function $H_1(z)$ and impulse response $h_1(k)$ shown in Figure 1.15a. After determining the zeros of the transfer function, we factorize $H_1(z)$ as:
$$ H_1(z) = b_0 \prod_{n=1}^{4} \left(1 - z_n z^{-1}\right) \tag{1.133} $$
As shown in Figure 1.15a, $H_1(z)$ is minimum phase. We now observe that the magnitude of the frequency response does not change if a zero $z_n$ is replaced with $1/z_n^*$ in (1.133). If we move all the zeros outside the unit circle, we get a maximum-phase system $H_2(z)$ whose impulse response is shown in Figure 1.15b. A general case, that is a transfer function with some zeros inside and others outside the unit circle, is given in Figure 1.15c. The coefficients of the impulse responses $h_1$, $h_2$, and $h_3$ are given in Table 1.4. The coefficients are normalized so that the three impulse responses have equal energy.
We define the partial energy of a causal impulse response as
$$ E(k) = \sum_{i=0}^{k} |h(i)|^2 \tag{1.134} $$
Comparing the partial-energy sequences for the three impulse responses of Figure 1.15, one
finds that the minimum (maximum) phase system yields the largest (smallest) fE.k/g. In
other words, the magnitude of the frequency responses being equal, a minimum (maximum)
phase system concentrates all its energy on the first (last) samples of the impulse response.
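The energy-concentration property can be illustrated numerically; a minimal sketch (NumPy assumed) with a hypothetical 3-tap minimum-phase FIR filter, whose maximum-phase counterpart is built by reflecting each zero $z_n$ to $1/z_n^*$ and rescaling to equal total energy, as in (1.134):

```python
import numpy as np

# Hypothetical minimum-phase FIR filter: zeros at 0.5 and -0.4 (inside unit circle)
h_min = np.poly([0.5, -0.4]).real          # coefficients [1, -0.1, -0.2]

# Reflect each zero z_n to 1/z_n*, then rescale to equal energy
h_max = np.poly([1 / 0.5, 1 / -0.4]).real
h_max *= np.sqrt(np.sum(h_min**2) / np.sum(h_max**2))

E_min = np.cumsum(np.abs(h_min)**2)        # partial energy E(k), (1.134)
E_max = np.cumsum(np.abs(h_max)**2)

# Equal total energy, but the minimum-phase filter concentrates it early
assert np.isclose(E_min[-1], E_max[-1])
assert np.all(E_min >= E_max - 1e-12)
```

With real zeros, $1/z_n^* = 1/z_n$, and the rescaled maximum-phase filter has exactly the same magnitude response as the minimum-phase one.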
Extending our previous considerations also to IIR filters, if $h_1$ is a causal minimum-phase filter, i.e. $H_1(z) = H_{min}(z)$ is a ratio of polynomials in $z^{-1}$ with poles and zeros inside the unit circle, then $H_{max}(z) = K\, H_{min}^*\!\left(1/z^*\right)$, where $K$ is a constant, is an anticausal maximum-phase filter, i.e. $H_{max}(z)$ is a ratio of polynomials in $z$ with poles and zeros outside the unit circle.

Table 1.4 Impulse responses of systems having the same magnitude of the frequency response.

Figure 1.15. Impulse response magnitudes and zero locations for three systems having the same frequency response magnitude.

In the case of a minimum-phase FIR filter with impulse response $h_{min}(n)$, $n = 0, 1, \dots, q$, $H_2(z) = z^{-q}\, H_{min}^*\!\left(1/z^*\right)$ is a causal maximum-phase filter. Moreover, the relation $\{h_2(n)\} = \{h_1^*(q-n)\}$, $n = 0, 1, \dots, q$, is satisfied. In this text we use the notation $\{h_2(n)\} = \{h_1^{B*}(n)\}$, where $B$ is the backward operator that orders the elements of a sequence from the last to the first.
In Appendix 1.A, multirate transformations are described for systems in which the time domain of the input differs from that of the output. In particular, decimator and interpolator filters are introduced, together with their efficient implementations.
Let us consider a filter with impulse response $h$ and frequency response $\mathcal{H}$. If $h$ assumes real values, then $\mathcal{H}$ is Hermitian, $\mathcal{H}(-f) = \mathcal{H}^*(f)$, and $|\mathcal{H}(f)|$ is an even function. Depending on the support of $|\mathcal{H}(f)|$, the classification of Figure 1.16 is usually adopted. If $h$ assumes complex values, the terminology is less standard. We adopt the classification of Figure 1.17, in which the filter is a lowpass filter (LPF) if the support of $|\mathcal{H}(f)|$ includes the origin; otherwise it is a passband filter (PBF).
Figure 1.16. Classification of real-valued analog filters on the basis of the support of $|\mathcal{H}(f)|$.
1.5. Signal bandwidth 29
Figure 1.17. Classification of complex-valued analog filters on the basis of the support of $|\mathcal{H}(f)|$.
Analogously, for a signal $x$, we will use the same denomination and say that $x$ is a baseband (BB) or passband (PB) signal depending on whether the support of $|X(f)|$, $f \in \mathbb{R}$, includes the origin or not.

Definition 1.10
In general, for a real-valued signal $x$, the set of positive frequencies such that $|X(f)| \ne 0$ is called passband or simply band $B$:
$$ B = \{ f \ge 0 : |X(f)| \ne 0 \} \tag{1.135} $$

11 The signal bandwidth may be given different definitions. Let us consider an LPF having frequency response $\mathcal{H}(f)$. The filter gain $H_0$ is usually defined as $H_0 = |\mathcal{H}(0)|$; other definitions are the average gain of the filter in the passband $B$, or $\max_f |\mathcal{H}(f)|$. We give the following four definitions for the bandwidth $B$ of $h$.
a) First zero:
$$ B = \min\{ f > 0 : \mathcal{H}(f) = 0 \} \tag{1.136} $$
For example, with regard to the signals of Figure 1.16, we have that for an LPF $B = f_2$, whereas for a PBF $B = f_2 - f_1$. In the case of a complex-valued signal $x$, the passband is equivalent to the support of $X$, and $B$ is thus given by the measure of the entire support.
For discrete-time filters, for which $\mathcal{H}$ is periodic of period $1/T_c$, the same definitions hold, with the caution of considering the support of $|\mathcal{H}(f)|$ within one period, say between $-1/(2T_c)$ and $1/(2T_c)$. In the case of discrete-time highpass filters (HPF), the passband extends from a certain frequency $f_1$ to $1/(2T_c)$.
As discrete-time signals are often obtained by sampling continuous-time signals, we will
state the following fundamental theorem.
$$ h_k = q(kT_c) \tag{1.141} $$
univocally represent the signal $q(t)$, $t \in \mathbb{R}$, under the condition that the sampling frequency $1/T_c$ satisfies the relation
$$ \frac{1}{T_c} \ge B_0 \tag{1.142} $$
For the proof, which is based on the relation (1.90) between a signal and its samples, we refer the reader to [1].
$B_0$ is often referred to as the minimum sampling frequency. If $1/T_c < B_0$ the signal cannot be perfectly reconstructed from its samples, which originates the so-called aliasing phenomenon in the frequency-domain signal representation.
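A quick numerical illustration of the sampling condition (1.142); a minimal sketch (NumPy assumed) showing that a sinusoid sampled below the minimum sampling frequency is indistinguishable from a lower-frequency alias:

```python
import numpy as np

f0 = 7.0                 # sinusoid frequency (Hz); minimum sampling freq. B0 = 2*f0 = 14 Hz
Fc = 10.0                # sampling frequency 1/Tc < B0: aliasing occurs
Tc = 1 / Fc
k = np.arange(50)

x = np.cos(2 * np.pi * f0 * k * Tc)
# Alias at f0 - Fc = -3 Hz; the cosine is even, so a 3 Hz tone gives the same samples
x_alias = np.cos(2 * np.pi * 3.0 * k * Tc)

assert np.allclose(x, x_alias)
```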
Figure 1.18 illustrates the various definitions for a particular $|\mathcal{H}(f)|$.

Figure 1.18. The real signal bandwidth following the definitions of:
1) Bandwidth at first zero: $B_z = 0.652$ Hz.
2) Amplitude-based bandwidth: $B_{3\,\text{dB}} = 0.5$ Hz, $B_{40\,\text{dB}} = 0.87$ Hz, $B_{50\,\text{dB}} = 1.62$ Hz.
3) Energy-based bandwidth: $B_{E(p=90)} = 1.362$ Hz, $B_{E(p=99)} = 1.723$ Hz.
4) Equivalent noise bandwidth: $B_{eq} = 0.5$ Hz.
In turn, the signal $q(t)$, $t \in \mathbb{R}$, can be reconstructed from its samples $\{h_k\}$ according to the scheme of Figure 1.19, where an interpolation filter is employed having the ideal frequency response
$$ \mathcal{G}_I(f) = \begin{cases} 1 & f \in \mathcal{B} \\ 0 & \text{elsewhere} \end{cases} \tag{1.143} $$
We note that for real-valued baseband signals $B_0 = 2B$. For passband signals, care must be taken in the choice of $B_0 \ge 2B$ to avoid aliasing between the positive and negative frequency components of $Q(f)$.
Figure 1.20. Characteristics of a filter satisfying the conditions for the absence of signal distortion in the frequency interval $(f_1, f_2)$.
12 For a complex number c, arg c denotes the phase of c (see note 3, page 441).
1.6. Passband signals 33
PB filter. Referring to Figure 1.21 and to the transformations illustrated in Figure 1.22, given $x$ we extract its positive-frequency components using an analytic filter or phase splitter, $h^{(a)}$, having the following ideal frequency response
$$ \mathcal{H}^{(a)}(f) = 2 \cdot 1(f) = \begin{cases} 2 & f > 0 \\ 0 & f < 0 \end{cases} \tag{1.150} $$
Figure 1.21. Illustration of transformations to obtain the baseband equivalent signal $x^{(bb)}$ around the carrier frequency $f_0$ using a phase splitter.

Figure 1.22. Transformations to obtain the baseband equivalent signal $x^{(bb)}$ around the carrier frequency $f_0$ using a phase splitter: $x(t)$ is passed through $h^{(a)}$ to give $x^{(a)}(t)$, which is multiplied by $e^{-j2\pi f_0 t}$ to give $x^{(bb)}(t)$.
The signal $x^{(a)}$ is then frequency shifted by $-f_0$ to obtain a BB signal, $x^{(bb)}$. The signal $x^{(bb)}$ is the baseband equivalent of $x$, also called complex envelope of $x$ around the carrier frequency $f_0$. Analytically, we have
$$ x^{(a)}(t) = \left(x * h^{(a)}\right)(t) \;\xrightarrow{\;\mathcal{F}\;}\; X^{(a)}(f) = X(f)\, \mathcal{H}^{(a)}(f) \tag{1.151} $$
$$ x^{(bb)}(t) = x^{(a)}(t)\, e^{-j2\pi f_0 t} \;\xrightarrow{\;\mathcal{F}\;}\; X^{(bb)}(f) = X^{(a)}(f + f_0) \tag{1.152} $$
and in the frequency domain
$$ X^{(bb)}(f) = \begin{cases} 2X(f + f_0) & f > -f_0 \\ 0 & f < -f_0 \end{cases} \tag{1.153} $$
In other words, $x^{(bb)}$ is given by the components of $x$ at positive frequencies, scaled by 2 and frequency shifted by $-f_0$.
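The chain of transformations (1.151)–(1.152) can be sketched with SciPy's `hilbert`, which returns the analytic signal $x^{(a)}$; the carrier frequency, sampling rate, and envelope below are hypothetical:

```python
import numpy as np
from scipy.signal import hilbert

f0, Fc = 50.0, 1000.0                     # carrier and sampling frequency (assumed)
t = np.arange(0, 1, 1 / Fc)
a = 1 + 0.5 * np.cos(2 * np.pi * 2 * t)   # slowly varying real envelope (2 Hz << f0)

x = a * np.cos(2 * np.pi * f0 * t)        # real passband signal (phi_0 = 0)
x_a = hilbert(x)                          # analytic signal x^(a)
x_bb = x_a * np.exp(-2j * np.pi * f0 * t) # complex envelope x^(bb)

# For this DSB signal with phi_0 = 0, x^(bb)(t) = a(t); the window contains an
# integer number of cycles, so the FFT-based Hilbert transform is exact here
assert np.allclose(x_bb.real, a, atol=1e-6)
assert np.max(np.abs(x_bb.imag)) < 1e-6
```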
BB filter. One gets the same result using a frequency shift of $x$ followed by a lowpass filter (see Figures 1.23 and 1.24). It is immediate to determine the relation between the frequency responses of the filters of Figure 1.21 and Figure 1.23:
$$ \mathcal{H}(f) = \mathcal{H}^{(a)}(f + f_0) \tag{1.154} $$
From (1.154) one can derive the relation between the impulse response of the analytic filter and the impulse response of the lowpass filter:
$$ h^{(a)}(t) = h(t)\, e^{j2\pi f_0 t} \tag{1.155} $$
Figure 1.23. Illustration of transformations to obtain the baseband equivalent signal x.bb/
around the carrier frequency f0 using a lowpass filter.
Figure 1.24. Transformations to obtain the baseband equivalent signal $x^{(bb)}$ around the carrier frequency $f_0$ using a lowpass filter: $x(t)$ is multiplied by $e^{-j2\pi f_0 t}$ and filtered by the LPF $h$ to give $x^{(bb)}(t)$.

Figure 1.25. Relation between a signal, its complex envelope and the analytic signal.
Magnitude and phase of $\mathcal{H}^{(h)}(f)$ are shown in Figure 1.28. We note that $h^{(h)}$ phase-shifts by $-\pi/2$ the positive-frequency components of the input and by $\pi/2$ the negative-frequency components. In practice these filter specifications are imposed only on the passband of the input signal.13 To simplify the notation, in block diagrams a Hilbert filter is indicated as "$-\pi/2$".
Figure 1.27. Extraction of the components $x_I^{(bb)}(t)$ and $x_Q^{(bb)}(t)$ of a passband signal $x(t)$ by mixing with $\cos(2\pi f_0 t)$ and $-\sin(2\pi f_0 t)$, respectively.
13 We note that the ideal Hilbert filter in Figure 1.28 has an impulse response given by (see Table 1.2 on page 17):
$$ h^{(h)}(t) = \frac{1}{\pi t} \tag{1.164} $$
Comparing the frequency responses of the analytic filter (1.150) and of the Hilbert filter
(1.163), we obtain the relation
Consequently, if $x$ is the input signal, the output of the Hilbert filter (also denoted as Hilbert transform of $x$) is
$$ x^{(h)}(t) = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x(\tau)}{t - \tau}\, d\tau \tag{1.165} $$
Figure 1.28. Magnitude and phase responses of the ideal Hilbert filter: $|\mathcal{H}^{(h)}(f)| = 1$, with phase $-\pi/2$ for $f > 0$ and $+\pi/2$ for $f < 0$.
Then, letting
$$ x^{(h)}(t) = \left(x * h^{(h)}\right)(t) \tag{1.168} $$
the analytic signal can be expressed as
$$ x^{(a)}(t) = x(t) + j\, x^{(h)}(t) \tag{1.169} $$
Consequently, from (1.152), (1.160) and (1.161):
Moreover, noting that from (1.163) $(-j\,\text{sgn}\,f)(-j\,\text{sgn}\,f) = -1$, by taking the Hilbert transform of $x^{(h)}$ we get the initial signal with the sign changed. Then it results:
$$ x(t) = -\frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x^{(h)}(\tau)}{t - \tau}\, d\tau \tag{1.166} $$
14 We recall that the design of a filter, and in particular of a Hilbert filter, requires the introduction of a suitable delay. In other words, we are only able to produce an output with a delay $t_D$, i.e. $x^{(h)}(t - t_D)$. Consequently, in the block diagram of Figure 1.27, also $x(t)$ and the various sinusoidal waveforms must be delayed.
Example 1.6.1
Let $x(t)$ be a sinusoidal signal,
$$ x(t) = A \cos(2\pi f_0 t + \varphi_0) \tag{1.172} $$
Then
$$ X(f) = \frac{A}{2}\, e^{j\varphi_0}\, \delta(f - f_0) + \frac{A}{2}\, e^{-j\varphi_0}\, \delta(f + f_0) \tag{1.173} $$
The analytic signal is given by:
$$ X^{(a)}(f) = A\, e^{j\varphi_0}\, \delta(f - f_0) \;\xrightarrow{\;\mathcal{F}^{-1}\;}\; x^{(a)}(t) = A\, e^{j\varphi_0}\, e^{j2\pi f_0 t} \tag{1.174} $$
and
$$ X^{(bb)}(f) = A\, e^{j\varphi_0}\, \delta(f) \;\xrightarrow{\;\mathcal{F}^{-1}\;}\; x^{(bb)}(t) = A\, e^{j\varphi_0} \tag{1.175} $$
We note that we have chosen as reference carrier frequency of the complex envelope the same carrier frequency as in (1.172).
Example 1.6.2
Let
and
Another analytical technique to get the expression of the signal after the various transfor-
mations is obtained by applying the following theorem.
[Figure: spectrum $X(f)$ of the passband signal of Example 1.6.2, with bands of width $B$ around $\pm f_0$, and the corresponding baseband equivalent spectrum $X^{(bb)}(f)$ with support $(-B/2, B/2)$.]
Theorem 1.1
Let the product of two real signals be
Proof. We consider the general relation (1.157), valid for every real signal:
$$ c(t) = \tfrac{1}{2}\, c^{(a)}(t) + \tfrac{1}{2}\, c^{(a)*}(t) \tag{1.182} $$
In the frequency domain the support of the first term in (1.183) is given by the interval $[f_0 - B, +\infty)$, while that of the second is equal to $(-\infty, -f_0 + B]$. Under the hypothesis that $f_0 \ge B$, the two terms in (1.183) have disjoint support in the frequency domain and (1.181) is immediately obtained.
Corollary 1.1
From (1.181) we obtain
and
An interesting application of (1.186) is in the design of a Hilbert filter $h^{(h)}$ starting from a lowpass filter $h$. In fact, from (1.155) and (1.186) we get
$$ h^{(h)}(t) = h(t)\, \sin(2\pi f_0 t) \tag{1.188} $$
Example 1.6.3
Let a modulated double sideband (DSB) signal be expressed as
$$ x(t) = a(t)\, \cos(2\pi f_0 t + \varphi_0) \tag{1.189} $$
where $a$ is a BB signal with bandwidth $B$. Then, if $f_0 > B$, from the above theorem we have the following relations:
$$ x^{(a)}(t) = a(t)\, e^{j(2\pi f_0 t + \varphi_0)} \tag{1.190} $$
$$ x^{(h)}(t) = a(t)\, \sin(2\pi f_0 t + \varphi_0) \tag{1.191} $$
$$ x^{(bb)}(t) = a(t)\, e^{j\varphi_0} \tag{1.192} $$
We list in Table 1.5 some properties of the Hilbert transformation (1.168) that are easily obtained by using the Fourier transform and the properties of Table 1.1.
$$ = \text{Re}\left[\left( h(\tau) * x^{(bb)}(\tau)\, \frac{e^{-j\varphi_1}}{2} + h(\tau) * x^{(bb)*}(\tau)\, \frac{e^{+j(2\pi 2 f_0 \tau + \varphi_1)}}{2} \right)(t)\right] \tag{1.193} $$
$$ = \text{Re}\left[\frac{e^{-j\varphi_1}}{2}\, \left(h * x^{(bb)}\right)(t)\right] $$
where the last equality follows because the term with frequency components around $2f_0$ is filtered out by the LPF.
We note, moreover, that the filter $h^{(bb)}$ in Figure 1.30c has in-phase component $h_I^{(bb)}$ and quadrature component $h_Q^{(bb)}$ that are related to $\mathcal{H}^{(a)}$ by (see (1.160) and (1.161))
$$ \mathcal{H}_I^{(bb)}(f) = \frac{1}{2}\left[\mathcal{H}^{(bb)}(f) + \mathcal{H}^{(bb)*}(-f)\right] = \frac{1}{2}\left[\mathcal{H}^{(a)}(f + f_0) + \mathcal{H}^{(a)*}(-f + f_0)\right] \tag{1.194} $$
and
$$ \mathcal{H}_Q^{(bb)}(f) = \frac{1}{2j}\left[\mathcal{H}^{(bb)}(f) - \mathcal{H}^{(bb)*}(-f)\right] = \frac{1}{2j}\left[\mathcal{H}^{(a)}(f + f_0) - \mathcal{H}^{(a)*}(-f + f_0)\right] \tag{1.195} $$
In the particular case where $\mathcal{H}^{(a)}(f + f_0) = \mathcal{H}^{(a)*}(-f + f_0)$, i.e. $\mathcal{H}^{(a)}$ is Hermitian around $f_0$, these relations become
$$ \mathcal{H}_I^{(bb)}(f) = \mathcal{H}^{(a)}(f + f_0) \qquad\text{and}\qquad \mathcal{H}_Q^{(bb)}(f) = 0 $$
2. Instantaneous phase
3. Instantaneous frequency
$$ f_x(t) = \frac{1}{2\pi}\, \frac{d}{dt}\, \varphi_x(t) \tag{1.198} $$
In terms of the complex envelope signal $x^{(bb)}$, from (1.152) the equivalent relations follow. Two simplified methods to get the envelope $M_x(t)$ from the PB signal $x(t)$ are given in Figure 6.58 on page 514. For example, if $x(t) = A \cos(2\pi f_0 t + \varphi_0)$ it follows that
$$ M_x(t) = A \tag{1.203} $$
$$ \varphi_x(t) = 2\pi f_0 t + \varphi_0 \tag{1.204} $$
$$ f_x(t) = f_0 \tag{1.205} $$
2. Phase deviation
$$ \Delta\varphi_x(t) = \varphi_x(t) - (2\pi f_0 t + \varphi_0) = \arg x^{(bb)}(t) - \varphi_0 \tag{1.207} $$
3. Frequency deviation
$$ \Delta f_x(t) = f_x(t) - f_0 = \frac{1}{2\pi}\, \frac{d}{dt}\, \Delta\varphi_x(t) \tag{1.208} $$
1.7.1 Correlation
Let $x(t)$ and $y(t)$, $t \in \mathbb{R}$, be two continuous-time random processes. We indicate the delay or lag with $\tau$ and the expectation operator with $E$.
1. Mean value
$$ m_x(t) = E[x(t)] \tag{1.210} $$
2. Statistical power
$$ M_x(t) = E[|x(t)|^2] \tag{1.211} $$
3. Autocorrelation
$$ r_x(t, t-\tau) = E[x(t)\, x^*(t-\tau)] \tag{1.212} $$
4. Cross-correlation
$$ r_{xy}(t, t-\tau) = E[x(t)\, y^*(t-\tau)] \tag{1.213} $$
5. Autocovariance
$$ c_x(t, t-\tau) = E[(x(t) - m_x(t))(x(t-\tau) - m_x(t-\tau))^*] = r_x(t, t-\tau) - m_x(t)\, m_x^*(t-\tau) \tag{1.214} $$
6. Cross-covariance
$$ c_{xy}(t, t-\tau) = E[(x(t) - m_x(t))(y(t-\tau) - m_y(t-\tau))^*] = r_{xy}(t, t-\tau) - m_x(t)\, m_y^*(t-\tau) \tag{1.215} $$
Observation 1.4
• $x$ and $y$ are orthogonal if $r_{xy}(t, t-\tau) = 0$, $\forall t, \tau$. In this case we write $x \perp y$.15
• $x$ and $y$ are uncorrelated if $c_{xy}(t, t-\tau) = 0$, $\forall t, \tau$.
• If at least one of the two random processes has zero mean, orthogonality is equivalent to uncorrelation.
• $x$ is wide-sense stationary (WSS) if
1. $m_x(t) = m_x$, $\forall t$,
2. $r_x(t, t-\tau) = r_x(\tau)$, $\forall t$.
15 We observe that the notion of orthogonality between two random processes is quite different from that of orthogonality between two deterministic signals. In fact, while in the deterministic case, based on Definition 1.2, it is sufficient that the inner product of the signals is zero, in the random case the cross-correlation must be zero for all delays and not only for zero delay.
In particular, we note that two random variables $v_1$ and $v_2$ are orthogonal if the condition $E[v_1 v_2^*] = 0$ is satisfied.
4. $r_{xy}(\tau) = r_{yx}^*(-\tau)$.
5. $r_{x^*}(\tau) = r_x^*(\tau)$.
hence the name PSD for the function Px . f /: it represents the distribution of the statistical
power in the frequency domain.
The pair of equations (1.216) and (1.217) is obtained from the Wiener–Khintchine theorem [2].
Definition 1.11
The passband B of a random process x is defined with reference to its PSD function.
1.7. Second-order analysis of random processes 47
Theorem 1.2
The PSD of a WSS process, $\mathcal{P}_x$, can be uniquely decomposed into a component $\mathcal{P}_x^{(c)}$ with no impulses and a discrete component $\mathcal{P}_x^{(d)}$ consisting of impulses (spectral lines), so that
$$ r_x(\tau) = r_x^{(c)}(\tau) + r_x^{(d)}(\tau) \tag{1.221} $$
with
$$ r_x^{(d)}(\tau) = \sum_{i \in I} M_i\, e^{j2\pi f_i \tau} \tag{1.222} $$
The most interesting consideration is that the following random process decomposition corresponds to the decomposition (1.219) of the PSD:
where $x^{(c)}$ and $x^{(d)}$ are orthogonal processes having PSD functions
where $\{x_i\}$ are orthogonal random variables (r.v.s) having statistical power
$$ E[|x_i|^2] = M_i \qquad i \in I \tag{1.226} $$
Observation 1.5
The spectral lines of the PSD identify the periodic components in the process.
Definition 1.12
A WSS random process is said to be asymptotically uncorrelated if the following two properties hold:
$$ 1)\quad \lim_{\tau \to \infty} r_x(\tau) = |m_x|^2 \tag{1.227} $$
1. Determine the frequency response of the various outputs in terms of the inputs. In our specific case, we have
$$ \mathcal{Y}_1 = \mathcal{H}_1 \mathcal{X}_1 + \mathcal{H}_2 \mathcal{X}_2 \tag{1.234} $$
$$ \mathcal{Y}_2 = \mathcal{H}_2 \mathcal{H}_3 \mathcal{X}_2 \tag{1.235} $$
2. Form the products
$$ \mathcal{Y}_1 \mathcal{Y}_1^* = |\mathcal{H}_1|^2\, \mathcal{X}_1 \mathcal{X}_1^* + |\mathcal{H}_2|^2\, \mathcal{X}_2 \mathcal{X}_2^* + \mathcal{H}_1 \mathcal{H}_2^*\, \mathcal{X}_1 \mathcal{X}_2^* + \mathcal{H}_1^* \mathcal{H}_2\, \mathcal{X}_1^* \mathcal{X}_2 \tag{1.236} $$
$$ \mathcal{Y}_2 \mathcal{Y}_2^* = |\mathcal{H}_2 \mathcal{H}_3|^2\, \mathcal{X}_2 \mathcal{X}_2^* \tag{1.237} $$
$$ \mathcal{Y}_1 \mathcal{Y}_2^* = \mathcal{H}_1 \mathcal{H}_2^* \mathcal{H}_3^*\, \mathcal{X}_1 \mathcal{X}_2^* + |\mathcal{H}_2|^2 \mathcal{H}_3^*\, \mathcal{X}_2 \mathcal{X}_2^* \tag{1.238} $$
3. Substitute the expressions of the products in the previous equations using the rule
$$ \mathcal{Y}_i \mathcal{Y}_j^* \to \mathcal{P}_{y_i y_j} \tag{1.239} $$
$$ \mathcal{X}_\ell \mathcal{X}_m^* \to \mathcal{P}_{x_\ell x_m} \tag{1.240} $$
[Figure: block diagram of the filters $h_1$, $h_2$, $h_3$ with inputs $x_1(t)$, $x_2(t)$ and outputs $y_1(t)$, $y_2(t)$.]
Definition 1.15
If the samples of the random process fx.k/g are statistically independent and identically
distributed we say that fx.k/g has i.i.d. samples.
$$ \mathcal{P}_x^{(d)}(f) = |m_x|^2 \sum_{\ell=-\infty}^{+\infty} \delta\!\left(f - \frac{\ell}{T_c}\right) \tag{1.253} $$
We note that, if the process has non-zero mean value, the PSD exhibits lines at multiples of $1/T_c$.
Example 1.7.1
We calculate the PSD of an i.i.d. sequence $\{x(k)\}$. From
$$ r_x(n) = \begin{cases} M_x & n = 0 \\ |m_x|^2 & n \ne 0 \end{cases} \tag{1.254} $$
it follows that
$$ c_x(n) = \begin{cases} \sigma_x^2 & n = 0 \\ 0 & n \ne 0 \end{cases} \tag{1.255} $$
Then
$$ r_x^{(c)}(n) = \sigma_x^2\, \delta_n \qquad r_x^{(d)}(n) = |m_x|^2 \tag{1.256} $$
$$ \mathcal{P}_x^{(c)}(f) = \sigma_x^2\, T_c \qquad \mathcal{P}_x^{(d)}(f) = |m_x|^2 \sum_{\ell=-\infty}^{+\infty} \delta\!\left(f - \frac{\ell}{T_c}\right) \tag{1.257} $$
From the comparison of (1.258) with (1.247), the PSD of $x(k)$ is related to $P_x(z)$ by
$$ \mathcal{P}_x(f) = T_c\, P_x(e^{j2\pi f T_c}) \tag{1.259} $$
Using Table 1.3, we obtain the relations between ACS and PSD listed in Table 1.6. Let the deterministic autocorrelation of $h$ be defined as16
$$ r_h(n) = \sum_{k=-\infty}^{+\infty} h(k)\, h^*(k-n) = [h(m) * h^*(-m)](n) \tag{1.260} $$

Table 1.6 Relations between ACS and PSD for a filtered process.
ACS: $r_{yx}(n) = (r_x * h)(n)$ — PSD: $P_{yx}(z) = P_x(z)\, H(z)$
ACS: $r_{xy}(n) = [r_x(m) * h^*(-m)](n)$ — PSD: $P_{xy}(z) = P_x(z)\, H^*(1/z^*)$
ACS: $r_y(n) = (r_{xy} * h)(n) = (r_x * r_h)(n)$ — PSD: $P_y(z) = P_{xy}(z)\, H(z) = P_x(z)\, H(z)\, H^*(1/z^*)$

16 In this text we use the same symbol to indicate the correlation between random processes and the correlation between deterministic functions.
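The last row of the table, $r_y(n) = (r_x * r_h)(n)$, can be verified for a white input, where it reduces to $r_y(n) = \sigma_x^2\, r_h(n)$; a minimal sketch (NumPy assumed, with a hypothetical 3-tap FIR filter):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 1.5
h = np.array([1.0, -0.6, 0.25])          # hypothetical FIR filter

# Long white-noise realization filtered by h
x = rng.normal(0.0, np.sqrt(sigma2), 200_000)
y = np.convolve(x, h, mode="valid")

# Deterministic autocorrelation r_h(n), (1.260), for lags n = 0, 1, 2
r_h = np.correlate(h, h, mode="full")[len(h) - 1:]

# Sample estimate of r_y(n) = E[y(k) y*(k-n)]
r_y = np.array([np.mean(y[n:] * y[:len(y) - n]) for n in range(3)])

assert np.allclose(r_y, sigma2 * r_h, atol=0.05)
```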
and
$$ \mathcal{P}_y(f) = T_c\, \sigma_x^2\, |H(e^{j2\pi f T_c})|^2 \tag{1.264} $$
In other words, $\mathcal{P}_y(f)$ has the same shape as the filter frequency response.
In the case of real filters
$$ H^*\!\left(\frac{1}{z^*}\right) = H(z^{-1}) \tag{1.265} $$
Among the various applications of (1.264), it is worth mentioning the process synthesis,
which deals with the generation of a random process having a pre-assigned PSD. Two
methods are shown in Section 4.6.6.
For the same input $x$, the output of the two filters is respectively $x^{(+)}$ and $x^{(-)}$. We find that,
with $x^{(-)}(t) = x^{(+)*}(t)$. The following relations are valid
and
as $x^{(+)}$ and $x^{(-)}$ have non-overlapping passbands. Then $x^{(+)} \perp x^{(-)}$, hence (1.271) yields
where $\mathcal{P}_{x^{(-)}}(f) = \mathcal{P}_{x^{(+)*}}(f) = \mathcal{P}_{x^{(+)}}(-f)$, using Property 5 of the PSD. The analytic signal $x^{(a)}$ is equal to $2x^{(+)}$, hence
and
hence
$$ \mathcal{P}_{x^{(bb)}}(f) = \mathcal{P}_{x^{(a)}}(f + f_0) = 4\,\mathcal{P}_{x^{(+)}}(f + f_0) \tag{1.280} $$
$$ \mathcal{P}_{x_Q^{(bb)} x_I^{(bb)}}(f) = \frac{1}{4j}\left[\mathcal{P}_{x^{(bb)}}(f) - \mathcal{P}_{x^{(bb)}}(-f)\right] \tag{1.288} $$
$$ r_{x_I^{(bb)} x_Q^{(bb)}}(\tau) = r_{x_Q^{(bb)} x_I^{(bb)}}(-\tau) = -r_{x_I^{(bb)} x_Q^{(bb)}}(-\tau) \tag{1.289} $$
One gets $x_I^{(bb)} \perp x_Q^{(bb)}$ only if $\mathcal{P}_{x^{(bb)}}$ is an even function; in any case the random variables $x_I^{(bb)}(t)$ and $x_Q^{(bb)}(t)$ are always orthogonal, since $r_{x_I^{(bb)} x_Q^{(bb)}}(0) = 0$. Referring to the block diagram in Figure 1.27b, as
$$ \mathcal{P}_{x^{(h)}}(f) = \mathcal{P}_x(f) \qquad\text{and}\qquad \mathcal{P}_{x^{(h)} x}(f) = -j\,\text{sgn}(f)\, \mathcal{P}_x(f) \tag{1.290} $$
one gets
and
Example 1.7.2
Let $x$ be a WSS process with power spectral density
$$ \mathcal{P}_x(f) = \frac{N_0}{2}\left[\text{rect}\!\left(\frac{f - f_0}{B}\right) + \text{rect}\!\left(\frac{f + f_0}{B}\right)\right] \tag{1.298} $$
depicted in Figure 1.33. It is immediate to get
$$ \mathcal{P}_{x^{(a)}}(f) = 2N_0\, \text{rect}\!\left(\frac{f - f_0}{B}\right) \tag{1.299} $$
and
$$ \mathcal{P}_{x^{(bb)}}(f) = 2N_0\, \text{rect}\!\left(\frac{f}{B}\right) \tag{1.300} $$
Then
$$ \mathcal{P}_{x_I^{(bb)}}(f) = \mathcal{P}_{x_Q^{(bb)}}(f) = \frac{1}{2}\,\mathcal{P}_{x^{(bb)}}(f) = N_0\, \text{rect}\!\left(\frac{f}{B}\right) \tag{1.301} $$
Cyclostationary processes
In short, we have seen that, if x is a real passband WSS process, then its complex envelope
is WSS, and x .bb/ ? x .bb/Ł . The converse is also true: if x .bb/ is a WSS process and
x .bb/ ? x .bb/Ł , then
Figure 1.33. PSD of $x$ (amplitude $N_0/2$, bands of width $B$ around $\pm f_0$), of the analytic signal $x^{(a)}$ (amplitude $2N_0$), of the complex envelope $x^{(bb)}$ (amplitude $2N_0$, support $(-B/2, B/2)$), and of the components $x_I^{(bb)}$, $x_Q^{(bb)}$ (amplitude $N_0$).
where
In (1.307), $\mathcal{F}_\tau$ denotes the Fourier transform with respect to the variable $\tau$. In our case, it is
Example 1.7.3
Let $x(t)$ be a modulated DSB signal (see (1.189)) with $a(t)$ a real random BB WSS process with bandwidth $B_a < f_0$ and autocorrelation $r_a(\tau)$. From (1.192) it results that $x^{(bb)}(t) = a(t)\, e^{j\varphi_0}$. Hence we have
Because $r_a(\tau)$ is not identically zero, observing (1.303) we find that $x(t)$ is cyclostationary with period $1/f_0$. From (1.308) the average PSD of $x$ is given by
$$ \bar{\mathcal{P}}_x(f) = \frac{1}{4}\left[\mathcal{P}_a(f - f_0) + \mathcal{P}_a(f + f_0)\right] \tag{1.311} $$
We note that one finds the same result (1.311) by assuming that $\varphi_0$ is a uniform r.v. in $[0, 2\pi)$; in this case $x$ turns out to be WSS.
Example 1.7.4
Let $x(t)$ be a modulated single sideband (SSB) signal with an upper sideband,
where $a^{(h)}(t)$ is the Hilbert transform of $a(t)$, a real WSS random process with autocorrelation $r_a(\tau)$ and bandwidth $B_a$.
We note that the modulating signal $(a(t) + j\, a^{(h)}(t))$ coincides with the analytic signal $a^{(a)}(t)$ and has spectral support only for positive frequencies, which is therefore one half of the support of $a(t)$.
Being
where $a^{(+)}(t)$ is defined in (1.271). In this case $x$ has bandwidth equal to $B_a$ and statistical power given by
$$ M_x = \tfrac{1}{4}\, M_a \tag{1.316} $$
where the signal $x$ is modulated DSB (1.309). To obtain the signal $a(t)$ from $r(t)$, one can use the coherent demodulation scheme illustrated in Figure 1.34 (see Figure 1.30b), where $h$ is an ideal lowpass filter having frequency response
$$ \mathcal{H}(f) = H_0\, \text{rect}\!\left(\frac{f}{2B_a}\right) \tag{1.318} $$
Let $r_o$ be the output signal of the demodulator, given by the sum of the desired part $x_o$ and noise $w_o$:
We now evaluate the ratio between the powers of the signals in (1.319),
$$ \Lambda_o = \frac{M_{x_o}}{M_{w_o}} \tag{1.320} $$
in terms of the reference ratio
$$ \Gamma = \frac{M_x}{(N_0/2)\, 2B_a} \tag{1.321} $$
Using the equivalent block scheme of Figure 1.34 and (1.192), we have
it results
$$ x_o(t) = \text{Re}\left[h(\cdot) * \tfrac{1}{2}\, e^{-j\varphi_1}\, a(\cdot)\, e^{j\varphi_0}\right](t) = \frac{H_0}{2}\, a(t)\, \cos(\varphi_0 - \varphi_1) \tag{1.324} $$
Hence we get
$$ M_{x_o} = \frac{H_0^2}{4}\, M_a\, \cos^2(\varphi_0 - \varphi_1) \tag{1.325} $$
In the same baseband-equivalent scheme, we consider the noise $w_{eq}$ at the output of the filter $h$; we find
$$ \mathcal{P}_{w_{eq}}(f) = \tfrac{1}{4}\, |\mathcal{H}(f)|^2\, 2N_0\, 1(f + f_0) = \frac{H_0^2}{2}\, N_0\, \text{rect}\!\left(\frac{f}{2B_a}\right) \tag{1.326} $$
Being now $w$ WSS, $w^{(bb)}$ is uncorrelated with $w^{(bb)*}$, and thus $w_{eq}$ with $w_{eq}^*$. Then, from
and
$$ M_{w_o} = \frac{H_0^2}{4}\, N_0\, 2B_a \tag{1.329} $$
$$ \Lambda_o = \Gamma \tag{1.331} $$
It is interesting to observe that, at the demodulator input, the ratio between the power of the desired signal and the power of the noise in the passband of $x$ is given by
$$ \Lambda_i = \frac{M_x}{(N_0/2)\, 4B_a} = \frac{\Gamma}{2} \tag{1.332} $$
For $\varphi_1 = \varphi_0$ then
$$ \Lambda_o = 2\Lambda_i \tag{1.333} $$
In other words, measuring the noise power in a passband equal to that of the desired signal, the DSB demodulator yields a gain of 2 in signal-to-noise ratio. We will now analyze the case of an SSB signal $x(t)$ (see (1.313)), coherently demodulated following the scheme of Figure 1.35, where $h_{PB}$ is a filter used to eliminate the noise that otherwise, after the mixer, would fall within the passband of the desired signal. The ideal frequency response of $h_{PB}$ is given by
$$ \mathcal{H}_{PB}(f) = \text{rect}\!\left(\frac{f - f_0 - B_a/2}{B_a}\right) + \text{rect}\!\left(\frac{f + f_0 + B_a/2}{B_a}\right) \tag{1.334} $$
Note that in this scheme we have assumed the phase of the receiver carrier equal to that of the transmitter, to avoid distortion of the desired signal.
Being
$$ \mathcal{H}_{PB}^{(bb)}(f) = 2\, \text{rect}\!\left(\frac{f - B_a/2}{B_a}\right) \tag{1.335} $$
the filter of the baseband-equivalent scheme is given by
$$ h_{eq}(t) = \tfrac{1}{2}\, \left(h_{PB}^{(bb)} * h\right)(t) \tag{1.336} $$
Figure 1.35. (a) Coherent SSB demodulator and (b) baseband-equivalent scheme.
$$ x_o(t) = \frac{H_0}{4}\, \text{Re}\left[a(t) + j\, a^{(h)}(t)\right] = \frac{H_0}{4}\, a(t) \tag{1.338} $$
In the baseband-equivalent scheme, the noise $w_{eq}$ at the output of $h_{eq}$ has a PSD given by
$$ \mathcal{P}_{w_{eq}}(f) = \tfrac{1}{4}\, |\mathcal{H}_{eq}(f)|^2\, 2N_0\, 1(f + f_0) = \frac{N_0}{2}\, H_0^2\, \text{rect}\!\left(\frac{f - B_a/2}{B_a}\right) \tag{1.339} $$
From the relation $w_o = w_{eq,I}$ and using (1.285), which is valid because $w_{eq} \perp w_{eq}^*$, we have
$$ \mathcal{P}_{w_o}(f) = \frac{1}{4}\left[\mathcal{P}_{w_{eq}}(f) + \mathcal{P}_{w_{eq}}(-f)\right] = \frac{H_0^2}{8}\, N_0\, \text{rect}\!\left(\frac{f}{2B_a}\right) \tag{1.340} $$
and
$$ M_{w_o} = \frac{H_0^2}{8}\, N_0\, 2B_a \tag{1.341} $$
Then we obtain
$$ \Lambda_o = \frac{(H_0^2/16)\, M_a}{(H_0^2/8)\, N_0\, 2B_a} \tag{1.342} $$
which using (1.316) and (1.321) can be written as
$$ \Lambda_o = \Gamma \tag{1.343} $$
We note that the SSB system yields the same performance (for $\varphi_1 = \varphi_0$) as a DSB system, even though only half of the bandwidth is required. Finally, it results in
$$ \Lambda_i = \frac{M_x}{(N_0/2)\, 2B_a} = \Lambda_o \tag{1.344} $$
Observation 1.6
We note that, even for the simple examples considered in this section, the desired signal is analyzed via the various transformations, whereas the noise is analyzed via its PSD. As a matter of fact, we are typically interested only in the statistical power of the noise at the system output. The demodulated signal $x_o(t)$, on the other hand, must be expressed as the sum of a desired component proportional to $a(t)$ and an orthogonal component that represents the distortion, which is typically small and has the same effects as noise.
In the previous example, the considered systems do not introduce any distortion since
xo .t/ is proportional to a.t/.
1.8. The autocorrelation matrix 63
Properties
1. $\mathbf{R}$ is Hermitian: $\mathbf{R}^H = \mathbf{R}$. For real random processes $\mathbf{R}$ is symmetric: $\mathbf{R}^T = \mathbf{R}$.
2. $\mathbf{R}$ is a Toeplitz matrix, i.e. all elements along any diagonal are equal.
3. $\mathbf{R}$ is positive semi-definite and almost always positive definite. Indeed, taking an arbitrary vector $\mathbf{v}^T = [v_0, \dots, v_{N-1}]$, and letting $y = \mathbf{x}^T(k)\, \mathbf{v}$, we have
$$ E[|y|^2] = E[\mathbf{v}^H \mathbf{x}^*(k)\, \mathbf{x}^T(k)\, \mathbf{v}] = \mathbf{v}^H \mathbf{R}\, \mathbf{v} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} v_i^*\, r_x(i-j)\, v_j \ge 0 \tag{1.347} $$
If $\mathbf{v}^H \mathbf{R}\, \mathbf{v} > 0$, $\forall \mathbf{v} \ne \mathbf{0}$, then $\mathbf{R}$ is said to be positive definite and all its principal minor determinants are positive; in particular $\mathbf{R}$ is non-singular.
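Properties 1–3 can be illustrated with a small numerical example; a minimal sketch (NumPy/SciPy assumed) using a hypothetical exponentially decaying ACS:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical ACS of a real WSS process: r_x(n) = 0.8^|n|
N = 5
r = 0.8 ** np.arange(N)
R = toeplitz(r)                        # Hermitian (here symmetric) and Toeplitz

assert np.allclose(R, R.T.conj())      # Property 1: Hermitian
eigvals = np.linalg.eigvalsh(R)
assert np.all(eigvals > 0)             # Property 3: positive definite here
assert np.isclose(eigvals.sum(), np.trace(R))   # tr R = sum of the eigenvalues
```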
Eigenvalues
We indicate by $\det[\mathbf{R}]$ the determinant of a matrix $\mathbf{R}$. The eigenvalues of $\mathbf{R}$ are the solutions $\lambda_i$, $i = 1, \dots, N$, of the characteristic equation of order $N$
$$ \det[\mathbf{R} - \lambda \mathbf{I}] = 0 \tag{1.348} $$
and the corresponding column eigenvectors $\mathbf{u}_i$ satisfy the equation
$$ \mathbf{R}\, \mathbf{u}_i = \lambda_i\, \mathbf{u}_i \tag{1.349} $$
Example 1.8.1
Let $\{w(k)\}$ be a white noise process. Its autocorrelation matrix $\mathbf{R}$ assumes the form
$$ \mathbf{R} = \begin{bmatrix} \sigma_w^2 & 0 & \cdots & 0 \\ 0 & \sigma_w^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_w^2 \end{bmatrix} \tag{1.350} $$
Example 1.8.2
We define a complex-valued sinusoid as
Other properties
1. From $\mathbf{R}^m \mathbf{u} = \lambda^m \mathbf{u}$ we obtain the relations of Table 1.7.
2. If the eigenvalues are distinct, then the eigenvectors are linearly independent:
$$ \sum_{i=1}^{N} c_i\, \mathbf{u}_i \ne \mathbf{0} \tag{1.357} $$
for all combinations of $\{c_i\}$, $i = 1, 2, \dots, N$, not all equal to zero. Therefore, in this case, the eigenvectors form a basis in $\mathbb{R}^N$.

Table 1.7 Eigenvalues of matrices related to $\mathbf{R}$.
Matrix: $\mathbf{R}$ — $\mathbf{R}^m$ — $\mathbf{R}^{-1}$ — $\mathbf{I} - \mu\mathbf{R}$
Eigenvalue: $\lambda_i$ — $\lambda_i^m$ — $\lambda_i^{-1}$ — $1 - \mu\lambda_i$

3. The trace of a matrix $\mathbf{R}$ is defined as the sum of the elements of the main diagonal, and we indicate it with $\text{tr}[\mathbf{R}]$. It holds
$$ \text{tr}\,\mathbf{R} = \sum_{i=1}^{N} \lambda_i \tag{1.358} $$
2. If the eigenvalues of $\mathbf{R}$ are distinct, then the eigenvectors are orthogonal. In fact, from (1.349) one gets:
$$ 0 = (\lambda_j - \lambda_i)\, \mathbf{u}_i^H \mathbf{u}_j \tag{1.363} $$
3. If the eigenvalues of $\mathbf{R}$ are distinct and their corresponding eigenvectors are normalized, i.e. $\|\mathbf{u}_i\|^2 = \mathbf{u}_i^H \mathbf{u}_i = 1$, so that
$$ \mathbf{u}_i^H \mathbf{u}_j = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases} \tag{1.364} $$
$$ \mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_N] \tag{1.365} $$
and
$$ \mathbf{I} - \mu\mathbf{R} = \mathbf{U}\, (\mathbf{I} - \mu\boldsymbol{\Lambda})\, \mathbf{U}^H = \sum_{i=1}^{N} (1 - \mu\lambda_i)\, \mathbf{u}_i \mathbf{u}_i^H \tag{1.370} $$
where $\boldsymbol{\Lambda} = \text{diag}\{\lambda_1, \dots, \lambda_N\}$. In fact, let $U_i(f)$ be the Fourier transform of the sequence represented by the elements of $\mathbf{u}_i$:
$$ U_i(f) = \sum_{n=1}^{N} u_{i,n}\, e^{-j2\pi f n T_c} \tag{1.372} $$
and using (1.248) and (1.372), the preceding equation can be written as
$$ \mathbf{u}_i^H \mathbf{R}\, \mathbf{u}_i = \int_{-\frac{1}{2T_c}}^{\frac{1}{2T_c}} \mathcal{P}_x(f) \sum_{n=1}^{N} \sum_{m=1}^{N} u_{i,n}^*\, e^{j2\pi f n T_c}\, u_{i,m}\, e^{-j2\pi f m T_c}\, df = \int_{-\frac{1}{2T_c}}^{\frac{1}{2T_c}} \mathcal{P}_x(f)\, |U_i(f)|^2\, df \tag{1.374} $$
1.9. Examples of random processes 67
Example 1.9.1
A r.v. with a Gaussian distribution can be generated from two r.v.s with uniform distribution
(see Appendix 1.B for an illustration of the method).
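The method referenced in Example 1.9.1 is commonly the Box–Muller transformation; a minimal sketch (NumPy assumed; the specific transformation is an assumption, since Appendix 1.B is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
u1 = rng.uniform(size=n)
u2 = rng.uniform(size=n)

# Box-Muller: two independent uniform r.v.s yield a Gaussian r.v.
g = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

assert abs(g.mean()) < 0.02            # approximately zero mean
assert abs(g.var() - 1.0) < 0.02       # approximately unit variance
```

The companion output $\sqrt{-2\ln u_1}\,\sin(2\pi u_2)$ gives a second, independent Gaussian r.v.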
Example 1.9.2
Let $\mathbf{x}^T = [x_1, \dots, x_N]$ be a real Gaussian random vector, $x_i \sim \mathcal{N}(m_i, \sigma_i^2)$. The joint probability density function is
$$ p_{\mathbf{x}}(\boldsymbol{\xi}) = \left[(2\pi)^N \det \mathbf{C}\right]^{-\frac{1}{2}}\, e^{-\frac{1}{2} (\boldsymbol{\xi} - \mathbf{m}_x)^T \mathbf{C}^{-1} (\boldsymbol{\xi} - \mathbf{m}_x)} \tag{1.377} $$
Example 1.9.3
Let $\mathbf{x}^T = [x_{1,I} + j x_{1,Q}, \dots, x_{N,I} + j x_{N,Q}]$ be a complex-valued Gaussian random vector. If the in-phase component $x_{i,I}$ and the quadrature component $x_{i,Q}$ are uncorrelated,
$$ E[(x_{i,I} - m_{x_{i,I}})(x_{i,Q} - m_{x_{i,Q}})] = 0 \qquad i = 1, 2, \dots, N \tag{1.378} $$
and, moreover,
$$ \sigma_{x_{i,I}}^2 = \sigma_{x_{i,Q}}^2 = \tfrac{1}{2}\, \sigma_{x_i}^2 \tag{1.379} $$
with the vector of mean values and the covariance matrix given by
Example 1.9.4
Let xT D [x1 .t1 /; : : : ; x N .t N /] be a complex-valued Gaussian (vector) process, with each
element xi .ti / having real and imaginary components that are uncorrelated Gaussian r.v.s
with zero mean and equal variance for all values of ti . The vector x is called a circularly
symmetric Gaussian random process. The joint probability density function in this case
results in
$$ p_{\mathbf{x}}(\boldsymbol{\xi}) = \left[\pi^N \det \mathbf{C}\right]^{-1}\, e^{-\boldsymbol{\xi}^H \mathbf{C}^{-1} \boldsymbol{\xi}} \tag{1.383} $$
Example 1.9.5
Let $x(t) = A \sin(2\pi f t + \varphi)$ be a real-valued sinusoidal signal with $\varphi$ a r.v. uniform in $[0, 2\pi)$, for which we will use the notation $\varphi \in \mathcal{U}[0, 2\pi)$. The mean of $x$ is
$$ m_x(t) = E[x(t)] = \int_0^{2\pi} A \sin(2\pi f t + a)\, \frac{1}{2\pi}\, da = 0 \tag{1.384} $$
Example 1.9.6
Given $N$ real-valued sinusoidal signals
$$ x(t) = \sum_{i=1}^{N} A_i \sin(2\pi f_i t + \varphi_i) \tag{1.386} $$
with $\{\varphi_i\}$ statistically independent uniform r.v.s in $[0, 2\pi)$, from Example 1.9.5 it is possible to obtain the mean value
$$ m_x(t) = \sum_{i=1}^{N} m_{x_i}(t) = 0 \tag{1.387} $$
$$ r_x(\tau) = \sum_{i=1}^{N} \frac{A_i^2}{2}\, \cos(2\pi f_i \tau) \tag{1.388} $$
We note that, according to Definition 1.12, page 48, the process (1.386) is not asymptotically uncorrelated.
Example 1.9.7
Given $N$ complex-valued sinusoidal signals
$$ x(t) = \sum_{i=1}^{N} A_i\, e^{j(2\pi f_i t + \varphi_i)} \tag{1.389} $$
with $\{\varphi_i\}$ statistically independent uniform r.v.s in $[0, 2\pi)$, following a procedure similar to that used in Examples 1.9.5 and 1.9.6, we find
$$ r_x(\tau) = \sum_{i=1}^{N} |A_i|^2\, e^{j2\pi f_i \tau} \tag{1.390} $$
Example 1.9.8
Let the discrete-time random process $y(k) = x(k) + w(k)$ be given by the sum of the random process $x(k)$ of Example 1.9.7 and white noise $w(k)$ with variance $\sigma_w^2$. Moreover, we assume $\{x(k)\}$ and $\{w(k)\}$ are uncorrelated. In this case
$$ r_y(n) = \sum_{i=1}^{N} |A_i|^2\, e^{j2\pi f_i n T_c} + \sigma_w^2\, \delta_n \tag{1.391} $$
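Relation (1.391) can be checked by estimating the ACS of a single complex exponential in white noise; a minimal sketch (NumPy assumed, with hypothetical amplitude, frequency, and noise variance):

```python
import numpy as np

rng = np.random.default_rng(4)
K, Tc = 200_000, 1.0
A, f1, sigma2 = 1.0, 0.1, 0.5          # hypothetical parameters (N = 1)

k = np.arange(K)
phi = rng.uniform(0, 2 * np.pi)
x = A * np.exp(1j * (2 * np.pi * f1 * k * Tc + phi))
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = x + w

# Sample estimates of r_y(0) and r_y(1)
r0 = np.mean(y * np.conj(y))
r1 = np.mean(y[1:] * np.conj(y[:-1]))

assert np.isclose(r0.real, A**2 + sigma2, atol=0.02)               # (1.391), n = 0
assert np.isclose(r1, A**2 * np.exp(2j * np.pi * f1 * Tc), atol=0.02)  # n = 1
```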
Example 1.9.9
We consider a signal obtained by pulse-amplitude modulation (PAM), expressed as
$$ y(t) = \sum_{k=-\infty}^{+\infty} x(k)\, h_{Tx}(t - kT) \tag{1.392} $$

Figure 1.36. PAM signal generation: the sequence $x(k)$, with symbol period $T$, drives the filter $h_{Tx}$ to produce $y(t)$.

The signal $y(t)$ is the output of the system shown in Figure 1.36, where $h_{Tx}$ is a finite-energy pulse, and $\{x(k)\}$ is a discrete-time (with $T$-spaced samples) WSS sequence, having power spectral density $\mathcal{P}_x$. We note that $\mathcal{P}_x(f)$ is a periodic function of period $1/T$. Let $r_{h_{Tx}}(\tau)$ be the deterministic autocorrelation of the signal $h_{Tx}$:
$$ r_{h_{Tx}}(\tau) = \int_{-\infty}^{+\infty} h_{Tx}(t)\, h_{Tx}^*(t - \tau)\, dt = [h_{Tx}(t) * h_{Tx}^*(-t)](\tau) \tag{1.393} $$
2. Correlation
$$r_y(t, t-\tau) = \sum_{i=-\infty}^{+\infty} r_x(i) \sum_{m=-\infty}^{+\infty} h_{Tx}(t - (i+m)T)\, h^*_{Tx}(t - \tau - mT) \qquad (1.395)$$
and
$$\bar{P}_y(f) = \mathcal{F}[\bar{r}_y(\tau)] = \left|\frac{1}{T}\, H_{Tx}(f)\right|^2 P_x(f) \qquad (1.398)$$
We observe that the modulator of a PAM system may be regarded as an interpolator filter with frequency response $H_{Tx}(f)/T$.
3. Average power for a white noise input
For a white noise input with power $M_x$, from (1.397) the average statistical power of the output signal is given by
$$\bar{M}_y = M_x\, \frac{E_h}{T} \qquad (1.399)$$
where $E_h = \int_{-\infty}^{+\infty} |h_{Tx}(t)|^2\, dt$ is the energy of $h_{Tx}$.
Filtering the i.i.d. input signal $\{x(k)\}$ by the scheme depicted in Figure 1.36, and observing the relation
$$r_{yy^*}(t, t-\tau) = \sum_{i=-\infty}^{+\infty} r_{xx^*}(i) \sum_{m=-\infty}^{+\infty} h_{Tx}(t - (i+m)T)\, h_{Tx}(t - \tau - mT) \qquad (1.404)$$
then
$$r_{xx^*}(i) = E[x^2(k)]\,\delta(i) = 0 \qquad (1.405)$$
and
$$r_{yy^*}(t, t-\tau) = 0 \qquad (1.406)$$
that is, $y(t) \perp y^*(t)$. In particular we find that $y(t)$ is circularly symmetric, i.e.
$$E[y^2(t)] = 0 \qquad (1.407)$$
We note that the condition (1.406) can be obtained under the less stringent condition that $x \perp x^*$; on the other hand, this requires that the following two conditions are verified:
$$r_{x_I}(i) = r_{x_Q}(i) \qquad (1.408)$$
and
Observation 1.7
It can be shown that if the filter $h_{Tx}$ has a bandwidth smaller than $1/(2T)$ and $x$ is a WSS sequence, then $y$ is WSS with spectral density given by (1.398).
Example 1.9.10
Let us consider a PAM signal sampled with period $T_Q = T/Q_0$, where $Q_0$ is a positive integer. Let
$$y_q = y(q T_Q), \qquad h_p = h_{Tx}(p\, T_Q) \qquad (1.410)$$
From (1.392) it follows that
$$y_q = \sum_{k=-\infty}^{+\infty} x(k)\, h_{q - kQ_0} \qquad (1.411)$$
If $Q_0 \neq 1$, (1.411) describes the input–output relation of an interpolator filter (see (1.609)).
We recall the statistical analysis given in Table 1.6, page 52. We denote by $H(f)$ the Fourier transform (see (1.84)) and by $r_h(n)$ the deterministic autocorrelation (see (1.260)) of the sequence $\{h_p\}$. We also assume that $\{x(k)\}$ is a WSS random sequence with mean $m_x$ and autocorrelation $r_x(n)$. In general, $\{y_q\}$ is a cyclostationary random sequence of period $Q_0$ with
1. Mean
$$m_y(q) = m_x \sum_{k=-\infty}^{+\infty} h_{q - kQ_0} \qquad (1.412)$$
2. Correlation
$$r_y(q, q-n) = \sum_{i=-\infty}^{+\infty} r_x(i) \sum_{m=-\infty}^{+\infty} h_{q - (i+m)Q_0}\, h^*_{q - n - mQ_0} \qquad (1.413)$$
where $E_h = \sum_{p=-\infty}^{+\infty} |h_p|^2$ is the energy of $\{h_p\}$. We point out that the condition $\bar{M}_y = M_x$ is satisfied if the energy of the filter impulse response is equal to the interpolation factor $Q_0$.
[Figure 1.37: the signal $x(t) = g(t) + w(t)$ is filtered by $g_M$, with $G_M(f) = K\,\dfrac{G^*(f)}{P_w(f)}\,e^{-j2\pi f t_0}$, and sampled at $t_0$, yielding $y(t_0) = g_u(t_0) + w_u(t_0)$.]
Then
$$\frac{|g_u(t_0)|^2}{r_{w_u}(0)} = \frac{\left|\displaystyle\int_{-\infty}^{+\infty} G_M(f)\, G(f)\, e^{j2\pi f t_0}\, df\right|^2}{\displaystyle\int_{-\infty}^{+\infty} P_w(f)\, |G_M(f)|^2\, df} = \frac{\left|\displaystyle\int_{-\infty}^{+\infty} G_M(f)\sqrt{P_w(f)}\; \frac{G(f)}{\sqrt{P_w(f)}}\; e^{j2\pi f t_0}\, df\right|^2}{\displaystyle\int_{-\infty}^{+\infty} P_w(f)\, |G_M(f)|^2\, df} \qquad (1.427)$$
where the integrand in the numerator was divided and multiplied by $\sqrt{P_w(f)}$. Implicitly it is assumed that $P_w(f) \neq 0$. Applying the Schwarz inequality (see Section 1.1) to the functions
$$G_M(f)\sqrt{P_w(f)} \qquad (1.428)$$
and
$$\frac{G^*(f)}{\sqrt{P_w(f)}}\; e^{-j2\pi f t_0} \qquad (1.429)$$
it turns out that
$$\frac{|g_u(t_0)|^2}{r_{w_u}(0)} \le \int_{-\infty}^{+\infty}\left|\frac{G(f)}{\sqrt{P_w(f)}}\, e^{j2\pi f t_0}\right|^2 df = \int_{-\infty}^{+\infty}\left|\frac{G(f)}{\sqrt{P_w(f)}}\right|^2 df \qquad (1.430)$$
Therefore the maximum value is equal to the right-hand side of (1.430) and is achieved for
$$G_M(f)\sqrt{P_w(f)} = K\, \frac{G^*(f)}{\sqrt{P_w(f)}}\; e^{-j2\pi f t_0} \qquad (1.431)$$
where $K$ is a constant. From (1.431) the solution (1.425) follows immediately.
from which comes the name of matched filter (MF), i.e. matched to the input signal pulse.
The desired signal pulse at the filter output has the frequency response
[Figure 1.38 setup: the signal $x(t) = g(t) + w(t)$ is filtered by $g_M$, with $g_M(t) = K g^*(t_0 - t)$, yielding $y(t) = K r_g(t - t_0) + w_u(t)$, sampled at $t_0$.]
Figure 1.38. Matched filter for an input pulse in the presence of white noise.
$$g_u(t) = K\, r_g(t - t_0) \qquad (1.436)$$
If $E_g$ is the energy of $g$, using the relation $E_g = r_g(0)$, the maximum of the functional (1.424) becomes
In Figure 1.39 the different pulse shapes are illustrated for a signal pulse $g$ with limited duration $t_g$. Note that in this case the matched filter also has limited duration and is causal if $t_0 \ge t_g$.
with
$$r_g(\tau) = T\left(1 - \frac{|\tau|}{T}\right)\mathrm{rect}\left(\frac{\tau}{2T}\right) \qquad (1.439)$$
[Figure 1.39: pulse shapes for a signal pulse $g(t)$ of duration $t_g$: the matched filter $g_M(t)$ for $t_0 = 0$ (anticausal) and for $t_0 = t_g$ (causal), and the autocorrelation $r_g(t)$, supported on $[-t_g, t_g]$.]
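The matched-filter property can be illustrated with a short discrete-time sketch for the rectangular pulse of Figure 1.39; the pulse length, noise level and number of trials are illustrative assumptions, not values from the text.

```python
import numpy as np

# Matched filter for a rectangular pulse of duration t_g = n samples in white
# noise; the output SNR at t0 should approach E_g / sigma_w^2 (here K = 1).
rng = np.random.default_rng(1)
n = 64
g = np.ones(n)                        # signal pulse g
g_M = g[::-1].conj()                  # g_M(k) = g*(n-1-k), i.e. t0 = t_g (causal)
E_g = float(np.dot(g, g))             # pulse energy E_g = r_g(0)

peak = np.convolve(g, g_M)[n - 1]     # noise-free output at t0 equals K r_g(0)

sigma_w = 2.0
# sampling the filter output at t0 amounts to correlating the input with g
outs = np.array([np.dot(g + sigma_w*rng.standard_normal(n), g)
                 for _ in range(2000)])
snr_hat = peak**2 / outs.var()        # Monte Carlo output SNR
```

Here `snr_hat` approaches $E_g/\sigma_w^2 = 16$, the maximum predicted by (1.430) for white noise.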
18 The limit is meant in the mean-square sense, that is, the variance of the r.v. $\frac{1}{K}\sum_{k=0}^{K-1} x(k) - m_x$ vanishes for $K \to \infty$.
then $x$ is said to be ergodic in the mean. In other words, for a process for which the above limit holds, the time average of the samples tends to the statistical mean as the number of samples increases. We note that the existence of the limit (1.442) implies the condition
$$\lim_{K\to\infty} E\left[\left|\frac{1}{K}\sum_{k=0}^{K-1} x(k) - m_x\right|^2\right] = 0 \qquad (1.443)$$
or equivalently
$$\lim_{K\to\infty} \frac{1}{K}\sum_{n=-(K-1)}^{K-1} \left(1 - \frac{|n|}{K}\right) c_x(n) = 0 \qquad (1.444)$$
From (1.444) we see that for a random process to be ergodic in the mean, some conditions on the second-order statistics must be satisfied. Analogously to definition (1.442), we say that $x$ is ergodic in correlation if the following limit holds:
$$\lim_{K\to\infty} \frac{1}{K}\sum_{k=0}^{K-1} x(k)\,x^*(k-n) = E[x(k)\,x^*(k-n)] = r_x(n) \qquad (1.445)$$
Also for processes that are ergodic in correlation one could derive a condition of ergodicity similar to that expressed by the limit (1.444). Let $y(k) = x(k)\,x^*(k-n)$. Observing (1.445) and (1.442), we find that ergodicity in correlation of the process $x$ is equivalent to ergodicity in the mean of the process $y$. Therefore it is easy to deduce that the condition (1.444) for $y$ translates into a condition on the fourth-order statistical moments of $x$.
In practice, we will assume all stationary processes to be ergodic; ergodicity is, however, difficult to prove for non-Gaussian random processes. We will not consider particular processes that are not ergodic, such as $x(k) = A$, where $A$ is a random variable, or $x(k)$ equal to the sum of sinusoidal signals (see (1.386)).
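The non-ergodicity of $x(k) = A$ is easy to visualize numerically: the time average over one realization converges to that realization's value of $A$, not to the ensemble mean. The sketch below, with illustrative parameters, contrasts this with white noise, which is ergodic in the mean.

```python
import numpy as np

# x(k) = A, with A a random variable: not ergodic in the mean.
rng = np.random.default_rng(2)
K = 10_000
A = rng.uniform(-1, 1, size=50)                          # one A per realization
time_avgs = np.array([np.full(K, a).mean() for a in A])  # each equals its own A

# Contrast: for white noise the time average tends to the ensemble mean 0.
noise_avg = rng.standard_normal(K).mean()
```

Averaging more samples of $x(k) = A$ never moves the time average toward $E[A] = 0$, whereas the white-noise time average shrinks toward zero as $K$ grows.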
The property of ergodicity assumes a fundamental importance if we observe that from a
single realization it is possible to obtain an estimate of the autocorrelation function and from
this, the power spectral density. Alternatively, one could prove that under the hypothesis$^{19}$
$$\sum_{n=-\infty}^{+\infty} |n|\, r_x(n) < \infty \qquad (1.446)$$
the following limit holds:
$$\lim_{K\to\infty} E\left[\frac{1}{K T_c}\left|T_c\sum_{k=0}^{K-1} x(k)\, e^{-j2\pi f k T_c}\right|^2\right] = P_x(f) \qquad (1.447)$$
Then, exploiting the ergodicity of a WSS random process, one obtains the relations
among the process itself, its autocorrelation function and power spectral density shown
19 We note that for random processes with non-zero mean and/or sinusoidal components this property does not hold. Therefore it is usually recommended that the deterministic components of the process be removed before spectral estimation is performed.
Figure 1.40. Relation between ergodic processes and their statistical description.
in Figure 1.40. We note how the direct computation of the PSD, given by (1.447), makes use of a statistical ensemble of the Fourier transform of the process $x$, while the indirect method via the ACS makes use of a single realization.
If we let
$$\tilde{X}_{K T_c}(f) = T_c\, \mathcal{F}[x(k)\, w_K(k)] \qquad (1.448)$$
where $w_K$ is the rectangular window of length $K$ (see (1.474)) and $T_d = K T_c$, (1.447) becomes
$$P_x(f) = \lim_{T_d\to\infty} \frac{E[|\tilde{X}_{T_d}(f)|^2]}{T_d} \qquad (1.449)$$
The relation (1.449) also holds for continuous-time ergodic random processes, where $\tilde{X}_{T_d}(f)$ denotes the Fourier transform of the windowed realization of the process, with a rectangular window of duration $T_d$.
$$\hat{m}_y = \frac{1}{K}\sum_{k=0}^{K-1} y(k) \qquad (1.450)$$
In fact, (1.450) attempts to determine the average component of the signal $\{y(k)\}$. As illustrated in Figure 1.41a, in general we can think of extracting the average component of $\{y(k)\}$ using a lowpass filter $h$ having unit gain, i.e. $H(0) = 1$, and suitable bandwidth $B$. Let $K$ be the length of the impulse response, with support from $k = 0$ to $k = K - 1$. Note that for a unit step input signal the transient part of the output signal lasts $K - 1$ time instants. Therefore we assume
$$\hat{m}_y = z(k) = (h * y)(k) \qquad k \ge K - 1 \qquad (1.451)$$
We now compute the mean and variance of the estimate. From (1.451), the mean value is given by
$$E[\hat{m}_y] = m_y\, H(0) = m_y \qquad (1.452)$$
Figure 1.41. (a) Time average as output of a narrow band lowpass filter. (b) Typical impulse
responses: exponential filter with parameter a D 125 and rectangular window with K D 33.
(c) Corresponding frequency responses.
as $H(0) = 1$. Using the expression in Table 1.6 for the correlation of a filter output signal given the input, the variance of the estimate is given by
$$\mathrm{var}[\hat{m}_y] = \sigma_{\hat{m}_y}^2 = \sum_{n=-\infty}^{+\infty} r_h(n)\, c_y(n) \qquad (1.453)$$
Assuming
$$S = \sum_{n=-\infty}^{+\infty} |c_y(n)| = \sigma_y^2 \sum_{n=-\infty}^{+\infty} \frac{|c_y(n)|}{\sigma_y^2} < \infty \qquad (1.454)$$
it follows that
$$\mathrm{var}[\hat{m}_y] \le E_h\, S \qquad (1.455)$$
where $E_h = r_h(0)$.
For an ideal lowpass filter
$$H(f) = \mathrm{rect}\left(\frac{f}{2B}\right) \qquad |f| < \frac{1}{2T_c} \qquad (1.456)$$
assuming as filter length $K$ that of the main lobe of $\{h(k)\}$, and neglecting a delay factor, it results that $E_h = 2B$ and $K \simeq 1/B$. Introducing the criterion that for a good estimate it must be
Rectangular window
$$h(k) = \begin{cases} \dfrac{1}{K} & k = 0, 1, \ldots, K-1 \\ 0 & \text{elsewhere} \end{cases} \qquad (1.460)$$
Exponential filter
$$h(k) = \begin{cases} (1-a)\,a^k & k \ge 0 \\ 0 & \text{elsewhere} \end{cases} \qquad (1.464)$$
with $|a| < 1$. The frequency response is given by
$$H(f) = \frac{1-a}{1 - a\, e^{-j2\pi f T_c}} \qquad (1.465)$$
Moreover, $E_h = (1-a)/(1+a)$ and, adopting as the length of $h$ the time constant of the filter, i.e. the interval it takes for the amplitude of the impulse response to decrease by a factor $e$,
$$K - 1 = \frac{1}{\ln(1/a)} \simeq \frac{1}{1-a} \qquad (1.466)$$
where the approximation holds for $a \simeq 1$. The 3 dB filter bandwidth is equal to
$$B = \frac{1-a}{2\pi T_c} \qquad \text{for } a > 0.9 \qquad (1.467)$$
The filter output has a simple expression given by the recursive equation
$$z(k) = a\,z(k-1) + (1-a)\,y(k) \qquad (1.468)$$
We note that, choosing $a$ as
$$a = 1 - 2^{-l} \qquad (1.469)$$
the expression (1.468) becomes
$$z(k) = z(k-1) + 2^{-l}\left(y(k) - z(k-1)\right) \qquad (1.470)$$
whose computation requires only two additions and one shift by $l$ bits. Moreover, from (1.466), the filter time constant is given by
$$K - 1 = 2^{l} \qquad (1.471)$$
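The recursion (1.470) can be sketched as follows; the shift amount, input mean and sample count are illustrative.

```python
import numpy as np

# Exponential mean estimator z(k) = z(k-1) + 2^{-l} (y(k) - z(k-1)),
# i.e. the filter (1.468) with a = 1 - 2^{-l}.
rng = np.random.default_rng(3)
l = 5                                    # a = 1 - 2**-5 = 0.96875
m_y = 2.0
y = m_y + rng.standard_normal(50_000)    # WSS input with mean m_y

z = 0.0
for yk in y:
    z += (yk - z) / 2**l                 # in fixed point: two additions, one l-bit shift
# z now estimates m_y, with variance on the order of E_h = (1-a)/(1+a) times var(y)
```

After the initial transient, `z` hovers around the true mean with a small residual fluctuation, which shrinks as $l$ grows at the price of a longer time constant $2^l$.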
General window
In addition to the two filters described above, a general window can be defined as
$$h(k) = A\, w(k) \qquad (1.472)$$
with $\{w(k)\}$ a window$^{20}$ of length $K$. The factor $A$ in (1.472) is introduced to normalize the area of $h$ to 1. We note that, for random processes with slowly time-varying statistics, equations (1.463) and (1.470) give an expression to update the estimates.
Unbiased estimate
$$\hat{r}_x(n) = \frac{1}{K - n}\sum_{k=n}^{K-1} x(k)\, x^*(k-n) \qquad n = 0, 1, \ldots, K-1 \qquad (1.478)$$
3. Hann window
$$w(k) = \begin{cases} 0.50 + 0.50\cos\!\left(2\pi\, \dfrac{k - \frac{D-1}{2}}{D-1}\right) & k = 0, 1, \ldots, D-1 \\ 0 & \text{elsewhere} \end{cases} \qquad (1.476)$$
Biased estimate
$$\check{r}_x(n) = \frac{1}{K}\sum_{k=n}^{K-1} x(k)\, x^*(k-n) = \left(1 - \frac{|n|}{K}\right)\hat{r}_x(n) \qquad (1.482)$$
The mean value of the biased estimate satisfies the following relations:
$$E[\check{r}_x(n)] = \left(1 - \frac{|n|}{K}\right) r_x(n) \xrightarrow[K\to\infty]{} r_x(n) \qquad (1.483)$$
Unlike the unbiased estimate, the mean of the biased estimate is not equal to the autocorrelation function, but approaches it as $K$ increases. Note that the biased estimate differs from the autocorrelation function by an additive constant, denoted as BIAS:
$$\mathrm{BIAS} = E[\check{r}_x(n)] - r_x(n) \qquad (1.484)$$
For a Gaussian process, the variance of the biased estimate is expressed as
$$\mathrm{var}[\check{r}_x(n)] = \left(\frac{K - |n|}{K}\right)^2 \mathrm{var}[\hat{r}_x(n)] \simeq \frac{1}{K}\sum_{m=-\infty}^{+\infty}\left[r_x^2(m) + r_x(m+n)\, r_x(m-n)\right] \qquad (1.485)$$
In general, the unbiased estimate of the ACS exhibits a mean-square error$^{21}$ larger than the biased one, especially for large values of $n$. It should also be noted that the estimate does not necessarily yield sequences that satisfy the properties of autocorrelation functions: for example, the following property may not be verified:
$$\hat{r}_x(0) \ge |\hat{r}_x(n)| \qquad n \ne 0 \qquad (1.487)$$
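The two estimators (1.478) and (1.482) can be sketched directly; the test signal below is illustrative. By construction the biased estimate equals $(1 - n/K)$ times the unbiased one.

```python
import numpy as np

def acs_unbiased(x, n):
    # (1.478): divide by the number of terms actually summed, K - n
    K = len(x)
    return np.sum(x[n:] * np.conj(x[:K - n])) / (K - n)

def acs_biased(x, n):
    # (1.482): divide by K, i.e. (1 - n/K) times the unbiased estimate
    K = len(x)
    return np.sum(x[n:] * np.conj(x[:K - n])) / K

x = np.random.default_rng(0).standard_normal(100)
ratio = acs_biased(x, 10) / acs_unbiased(x, 10)    # equals 1 - 10/100 = 0.9
```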
21 For example, for the estimator (1.478) the mean-square error is defined as $E[|\hat{r}_x(n) - r_x(n)|^2]$.
$$\hat{M}_x = \frac{1}{K}\sum_{k=0}^{K-1} |x(k)|^2 = \frac{1}{K T_c}\int_{-1/(2T_c)}^{1/(2T_c)} |\tilde{X}(f)|^2\, df \qquad (1.488)$$
using the properties of the Fourier transform (Parseval theorem). Based on (1.488), a PSD estimator called the periodogram is given by
$$P_{\mathrm{PER}}(f) = \frac{1}{K T_c}\, |\tilde{X}(f)|^2 \qquad (1.489)$$
We can write (1.489) as
$$P_{\mathrm{PER}}(f) = T_c \sum_{n=-(K-1)}^{K-1} \check{r}_x(n)\, e^{-j2\pi f n T_c} \qquad (1.490)$$
and, consequently,
$$E[P_{\mathrm{PER}}(f)] = T_c \sum_{n=-(K-1)}^{K-1} E[\check{r}_x(n)]\, e^{-j2\pi f n T_c} = T_c \sum_{n=-(K-1)}^{K-1} \left(1 - \frac{|n|}{K}\right) r_x(n)\, e^{-j2\pi f n T_c} = T_c\, W_B * P_x(f) \qquad (1.491)$$
where $W_B(f)$ is the Fourier transform of the Bartlett window
$$w_B(n) = \begin{cases} 1 - \dfrac{|n|}{K} & |n| \le K-1 \\ 0 & |n| > K-1 \end{cases} \qquad (1.492)$$
and
$$W_B(f) = \frac{1}{K}\left[\frac{\sin(\pi f K T_c)}{\sin(\pi f T_c)}\right]^2 \qquad (1.493)$$
We note that the periodogram estimate is affected by BIAS for finite $K$. Moreover, it also exhibits a large variance, as $P_{\mathrm{PER}}(f)$ is computed using the samples of $\check{r}_x(n)$ even for lags up to $K-1$, whose variance is very large.
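The equivalence of the direct form (1.489) and the ACS form (1.490) can be verified numerically; the signal and the evaluation frequency below are illustrative.

```python
import numpy as np

def periodogram(x, f, Tc):
    # (1.489): |X~(f)|^2 / (K Tc), with X~(f) = Tc sum_k x(k) exp(-j 2 pi f k Tc)
    K = len(x)
    Xf = Tc * np.sum(x * np.exp(-2j*np.pi*f*np.arange(K)*Tc))
    return np.abs(Xf)**2 / (K * Tc)

def periodogram_acs(x, f, Tc):
    # (1.490): Tc times the sum over lags of the biased ACS estimate (1.482)
    K = len(x)
    total = 0.0
    for n in range(-(K - 1), K):
        m = abs(n)
        r = np.sum(x[m:] * np.conj(x[:K - m])) / K
        if n < 0:
            r = np.conj(r)
        total += r * np.exp(-2j*np.pi*f*n*Tc)
    return (Tc * total).real

x = np.random.default_rng(0).standard_normal(32)
p_direct = periodogram(x, 0.13, 1.0)
p_acs = periodogram_acs(x, 0.13, 1.0)
```

The two forms coincide to machine precision, since (1.490) is an exact rewriting of (1.489).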
Welch periodogram
This method is based on applying (1.447) for finite $K$. Given a sequence of $K$ samples, different subsequences of $D$ consecutive samples are extracted. Subsequences may partially overlap. Let $x^{(s)}$ be the $s$-th subsequence, characterized by $S$ samples in common with the preceding subsequence $x^{(s-1)}$ and with the following one $x^{(s+1)}$. In general, $0 \le S \le D/2$, with the choice $S = 0$ yielding subsequences with no overlap and therefore with less correlation. The number of subsequences $N_s$ is$^{22}$
$$N_s = \left\lfloor \frac{K - D}{D - S} \right\rfloor + 1 \qquad (1.494)$$
Let $w$ be a window (see footnote 20 on page 82) of $D$ samples: then we compute the windowed transform of each subsequence,
$$\tilde{X}^{(s)}(f) = T_c \sum_{k=0}^{D-1} x^{(s)}(k)\, w(k)\, e^{-j2\pi f k T_c} \qquad (1.496)$$
and obtain
$$P^{(s)}_{\mathrm{PER}}(f) = \frac{1}{D T_c M_w}\left|\tilde{X}^{(s)}(f)\right|^2 \qquad (1.497)$$
where
$$M_w = \frac{1}{D}\sum_{k=0}^{D-1} w^2(k) \qquad (1.498)$$
is the normalized energy of the window. As a last step, for each frequency, the periodograms are averaged:
$$P_{\mathrm{WE}}(f) = \frac{1}{N_s}\sum_{s=0}^{N_s-1} P^{(s)}_{\mathrm{PER}}(f) \qquad (1.499)$$
where
$$W(f) = \sum_{k=0}^{D-1} w(k)\, e^{-j2\pi f k T_c} \qquad (1.501)$$
22 The symbol $\lfloor a \rfloor$ denotes the floor function, that is, the largest integer smaller than or equal to $a$. The symbol $\lceil a \rceil$ denotes the ceiling function, that is, the smallest integer larger than or equal to $a$.
Assuming the process is Gaussian and the different subsequences are statistically independent, we get$^{23}$
$$\mathrm{var}[P_{\mathrm{WE}}(f)] \propto \frac{1}{N_s}\, P_x^2(f) \qquad (1.502)$$
Note that the partial overlap introduces correlation between subsequences. From (1.502), we
see that the variance of the estimate is reduced by increasing the number of subsequences. In
general, D must be large enough so that the generic subsequence represents the process and
also Ns must be large to obtain a reliable estimate (see (1.502)); therefore the application
of the Welch method requires many samples.
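The procedure (1.494)–(1.499) can be sketched as below, evaluating (1.496) on the DFT frequency grid; the window choice, segment length and overlap are illustrative. For unit-variance white noise with $T_c = 1$ the estimate should be flat and close to 1.

```python
import numpy as np

def welch_psd(x, D, S, Tc, window):
    # subsequences of D samples, S samples shared with the preceding one (1.494)
    K = len(x)
    Ns = (K - D) // (D - S) + 1
    w = window(D)
    Mw = np.mean(w**2)                         # (1.498)
    P = np.zeros(D)
    for s in range(Ns):
        seg = x[s*(D - S): s*(D - S) + D] * w
        Xs = Tc * np.fft.fft(seg)              # windowed transform on the DFT grid
        P += np.abs(Xs)**2 / (D * Tc * Mw)     # (1.497)
    return P / Ns                              # (1.499)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
P = welch_psd(x, D=256, S=128, Tc=1.0, window=np.hanning)
```

Increasing `D` sharpens the resolution but reduces `Ns` and so increases the variance, which is the trade-off discussed below.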
$$E[P_{\mathrm{BT}}(f)] = T_c\, W * P_x(f) \qquad (1.504)$$
For a Gaussian process, if the Bartlett window is chosen, the variance of the estimate is given by
$$\mathrm{var}[P_{\mathrm{BT}}(f)] = \frac{1}{K}\, P_x^2(f)\, E_w = \frac{2L}{3K}\, P_x^2(f) \qquad (1.505)$$
The choice of the window length is based on a compromise between spectral resolution and the variance of the estimate. An example has already been seen in the correlogram, where the condition $L \le K/5$ must be satisfied. Another example is the Welch periodogram.
For a given observation of K samples, it is initially better to choose a small number of
samples over which to perform the DFT, and therefore a large number of windows (subse-
quences) over which to average the estimate. The estimate is then repeated by increasing
the number of samples per window, thus decreasing the number of windows. In this way
we get estimates with a higher resolution, but also characterized by an increasing variance.
The procedure is terminated once it is found that the increase in variance is no longer com-
pensated by an increase in the spectral resolution. The aforementioned method is called
window closing.
Example 1.11.1
Consider a realization of $K = 10000$ samples of the signal
$$y(kT_c) = \frac{1}{A_h}\sum_{n=-16}^{16} h(nT_c)\, w((k-n)T_c) + A_1\cos(2\pi f_1 k T_c + \varphi_1) + A_2\cos(2\pi f_2 k T_c + \varphi_2) \qquad (1.506)$$
where $\varphi_1, \varphi_2 \in U[0, 2\pi)$, $w(nT_c)$ is a white random process with zero mean and variance $\sigma_w^2 = 5$, $T_c = 0.2$, $A_1 = 1/20$, $f_1 = 1.5$, $A_2 = 1/40$, $f_2 = 1.75$, and
$$A_h = \sum_{k=-16}^{16} h(kT_c) \qquad (1.507)$$
Moreover,
$$h(kT_c) = \frac{\sin\!\left(\pi(1-\rho)\frac{kT_c}{T}\right) + 4\rho\,\frac{kT_c}{T}\cos\!\left(\pi(1+\rho)\frac{kT_c}{T}\right)}{\pi\,\frac{kT_c}{T}\left[1 - \left(4\rho\,\frac{kT_c}{T}\right)^2\right]}\;\mathrm{rect}\!\left(\frac{kT_c}{8T + T_c}\right) \qquad (1.508)$$
Figure 1.42. Comparison between spectral estimates obtained with Welch periodogram
method, using the Hamming or the rectangular window, and the analytical PSD given
by (1.509).
frequency resolution $F_q$. Consequently, a Dirac impulse, for example, of area $A_1^2/4$ will have a height equal to $A_1^2/(4F_q)$, thus maintaining the equivalence in statistical power between the different representations.
We now compare several spectral estimates, obtained using the previously described
methods; in particular we will emphasize the effect on the resolution of the type of window
used and the number of samples for each window.
We state beforehand the following result. Windowing a complex sinusoidal signal $\{e^{j2\pi f_1 k T_c}\}$ with $\{w(k)\}$ produces a signal having Fourier transform equal to $W(f - f_1)$, where $W(f)$ is the Fourier transform of $w$. Therefore, in the frequency domain the spectral line of a sinusoidal signal becomes a signal with shape $W(f)$ centered around $f_1$.
In general, from (1.497), the periodogram of a real sinusoidal signal with amplitude $A_1$ and frequency $f_1$ is
$$P_{\mathrm{PER}}(f) = \frac{T_c}{D M_w}\left(\frac{A_1}{2}\right)^2 \left|W(f - f_1) + W(f + f_1)\right|^2 \qquad (1.510)$$
Figure 1.42 shows, in addition to the analytical PSD (1.509), the estimate obtained by the Welch periodogram method using the Hamming or the rectangular window. The parameters used in (1.496) and (1.499) are $D = 1000$, $N_s = 19$, and 50% overlap between windows.
We observe that the use of the Hamming window yields an improvement of the estimate due to reduced leakage. Likewise, Figure 1.43 shows how the Hamming window also improves the estimate carried out with the correlogram; in particular, the estimates of Figure 1.43 were obtained using $L = 500$ in (1.503). Finally, Figure 1.44 shows how the resolution and
Figure 1.43. Comparison between spectral estimates obtained with the correlogram using
the Hamming or the rectangular window, and the analytical PSD given by (1.509).
Figure 1.44. Comparison of spectral estimates obtained with the Welch periodogram method, using the Hamming window, for different values of the parameters $D$ and $N_s$.
the variance of the estimate obtained by the Welch periodogram vary with the parameters
D and Ns , using the Hamming window. Note that by increasing D, and hence decreasing
Ns , both resolution and variance of the estimate increase.
$$x(k) = -\sum_{n=1}^{p} a_n\, x(k-n) + \sum_{n=0}^{q} b_n\, w(k-n) \qquad (1.511)$$
Rewriting (1.511) in terms of the input–output relation of a linear system, from (1.129) we find in general
$$x(k) = \sum_{n=0}^{+\infty} h_{\mathrm{ARMA}}(n)\, w(k-n) \qquad (1.512)$$
[Figure 1.45: direct-form realization of the ARMA model, with feedforward coefficients $b_0, b_1, \ldots, b_q$, feedback coefficients $-a_1, -a_2, \ldots, -a_p$, and delay elements $T_c$, producing $x(k)$ from $w(k)$.]
25 In a simulation of the process, the first samples x.k/ generated by (1.511) should be ignored because they
depend on the initial conditions.
which indicates that the filter used to realize the ARMA model is causal. From (1.129) one finds that the filter transfer function is given by
$$H_{\mathrm{ARMA}}(z) = \frac{B(z)}{A(z)} \qquad \text{where} \quad B(z) = \sum_{n=0}^{q} b_n z^{-n}, \quad A(z) = \sum_{n=0}^{p} a_n z^{-n}, \quad \text{assuming } a_0 = 1 \qquad (1.513)$$
Using (1.264), the power spectral density of the process $x$ is given by
$$P_x(f) = T_c\,\sigma_w^2 \left|\frac{B(f)}{A(f)}\right|^2 \qquad \text{where} \quad B(f) = B(e^{j2\pi f T_c}), \quad A(f) = A(e^{j2\pi f T_c}) \qquad (1.514)$$
MA(q) model
If we particularize the ARMA model by assuming
$$a_i = 0 \qquad i = 1, 2, \ldots, p \qquad (1.515)$$
or $A(z) = 1$, we get the moving-average model of order $q$. The equations of the ARMA model therefore reduce to
and
If we represent the function Px . f / of a process obtained by the MA model, we see that its
behavior is generally characterized by wide “peaks” and narrow “valleys”, as illustrated in
Figure 1.46.
AR(N) model
The autoregressive model of order $N$ is shown in Figure 1.47. The output process is described in this case by the recursive equation
$$x(k) = -\sum_{n=1}^{N} a_n\, x(k-n) + w(k) \qquad (1.518)$$
where $w$ is white noise with variance $\sigma_w^2$. The transfer function is given by
$$H_{\mathrm{AR}}(z) = \frac{1}{A(z)} \qquad (1.519)$$
[Figure 1.47: realization of the AR(N) model: $w(k)$ enters an adder whose feedback branch, through delay elements $T_c$, applies the coefficients $-a_1, -a_2, \ldots, -a_N$ to past outputs, producing $x(k)$.]
with
$$A(z) = 1 + \sum_{n=1}^{N} a_n z^{-n} \qquad (1.520)$$
We observe that (1.519) describes a filter having $N$ poles. Therefore $H_{\mathrm{AR}}(z)$ can be expressed as
$$H_{\mathrm{AR}}(z) = \frac{1}{(1 - p_1 z^{-1})(1 - p_2 z^{-1})\cdots(1 - p_N z^{-1})} \qquad (1.521)$$
For a causal filter, the stability condition is $|p_i| < 1$, $i = 1, 2, \ldots, N$, i.e. all poles must lie inside the unit circle of the $z$-plane.
In the case of the AR model, from Table 1.6 the z-transform of the ACS of $x$ is given by
$$P_x(z) = \frac{1}{A(z)\,A^*\!\left(\frac{1}{z^*}\right)}\, P_w(z) = \frac{\sigma_w^2}{A(z)\,A^*\!\left(\frac{1}{z^*}\right)} \qquad (1.522)$$
The roots of the denominator thus occur in pairs
$$|p_i|\, e^{j\varphi_i} \qquad \text{and} \qquad \frac{1}{|p_i|}\, e^{j\varphi_i} \qquad (1.523)$$
Letting $z = e^{j2\pi f T_c}$, we obtain
$$P_x(f) = \frac{T_c\,\sigma_w^2}{|A(f)|^2} \qquad (1.525)$$
Typically, the function $P_x(f)$ of an AR process has narrow "peaks" and wide "valleys" (see Figure 1.48), reciprocal to the behavior of the MA model.
$$P_x(z) = \frac{\sigma_w^2}{\displaystyle\prod_{i=1}^{N}\left(1 - |p_i|\, e^{j\varphi_i} z^{-1}\right)\left(1 - \frac{1}{|p_i|}\, e^{j\varphi_i} z^{-1}\right)} \qquad (1.526)$$
For a given $P_x(z)$, it is clear that the $N$ zeros of $A(z)$ in (1.522) can be chosen in $2^N$ different ways. The selection of the zeros of $A(z)$ is called spectral factorization. Two examples are illustrated in Figure 1.49.
As stated by the spectral factorization theorem (see page 53), there exists a unique spectral factorization that yields a minimum-phase $A(z)$, which is obtained by associating with $A(z)$ only the poles of $P_x(z)$ that lie inside the unit circle of the $z$-plane.
Whitening filter
We observe an important property, illustrated in Figure 1.50. Suppose $x$ is modeled as an AR process of order $N$ and has PSD given by (1.522). If $x$ is input to a filter having transfer function $A(z)$, the output of this filter is white noise. In this case the filter $A(z)$ is called a whitening filter (WF).
If $A(z)$ is minimum phase, the white process $w$ is also called the innovation of the process $x$, in the sense that the new information associated with the sample $x(k)$ is carried only by $w(k)$.
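The whitening property can be verified numerically: if an AR process is generated with a minimum-phase $A(z)$ and is then filtered by $A(z)$, the original white input is recovered exactly. The coefficients below are illustrative, with both poles inside the unit circle.

```python
import numpy as np

# AR(2) synthesis followed by the whitening filter A(z) = 1 + a1 z^-1 + a2 z^-2.
rng = np.random.default_rng(5)
a = np.array([0.9, 0.2])                 # poles at -0.4 and -0.5 (minimum phase)
w = rng.standard_normal(5000)

x = np.zeros_like(w)
for k in range(len(w)):                  # x(k) = -a1 x(k-1) - a2 x(k-2) + w(k)
    x[k] = w[k] - sum(a[n]*x[k-1-n] for n in range(2) if k-1-n >= 0)

y = x.copy()                             # y(k) = x(k) + a1 x(k-1) + a2 x(k-2)
y[1:] += a[0]*x[:-1]
y[2:] += a[1]*x[:-2]
# with zero initial conditions, y coincides with the white input w
```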
where $s$ and $x$ are uncorrelated processes. The process $s$, called a predictable process, is described by the recursive equation
$$s(k) = -\sum_{n=1}^{+\infty} \alpha_n\, s(k-n) \qquad (1.528)$$
Figure 1.49. Two examples of possible choices of the zeros (×) of A(z) among the poles of P_x(z).
Therefore any one of the three descriptions (ARMA, MA, or AR) can be adopted to
approximate the spectrum of a process, provided that the order is sufficiently high.
We observe that, for $n > 0$, $r_x(n)$ satisfies an equation analogous to (1.518), with the exception of the component $w(k)$. This implies that, if $\{p_i\}$ are the zeros of $A(z)$, $r_x$ can be written as
$$r_x(n) = \sum_{i=1}^{N} c_i\, p_i^{n} \qquad n > 0 \qquad (1.534)$$
Assuming an AR process with $|p_i| < 1$, for $i = 1, 2, \ldots, N$, we get
$$r_x(n) \xrightarrow[n\to\infty]{} 0 \qquad (1.535)$$
Simplifying the notation $r_x(n)$ to $r(n)$, and observing (1.533), for $n = 1, 2, \ldots, N$ one gets a set of equations that in matrix notation is expressed as
$$\begin{bmatrix} r(0) & r(-1) & \cdots & r(-N+1) \\ r(1) & r(0) & \cdots & r(-N+2) \\ \vdots & \vdots & & \vdots \\ r(N-1) & r(N-2) & \cdots & r(0) \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = -\begin{bmatrix} r(1) \\ r(2) \\ \vdots \\ r(N) \end{bmatrix} \qquad (1.536)$$
that is,
$$\mathbf{R}\,\mathbf{a} = -\mathbf{r} \qquad (1.537)$$
with obvious definition of the vectors. Under the hypothesis that the matrix $\mathbf{R}$ has an inverse, the solution for the coefficients $\{a_i\}$ is given by
$$\mathbf{a} = -\mathbf{R}^{-1}\,\mathbf{r} \qquad (1.538)$$
Equations (1.537) and (1.538), called the Yule–Walker equations, allow us to obtain the coefficients of an AR model for a process having autocorrelation function $r_x$. The variance $\sigma_w^2$ of the white noise at the input can be obtained from (1.533) for $n = 0$, which yields
$$\sigma_w^2 = r_x(0) + \sum_{m=1}^{N} a_m\, r_x(-m) = r_x(0) + \mathbf{r}^H\mathbf{a} \qquad (1.539)$$
Observation 1.8
• From (1.537) one finds that $\mathbf{a}$ does not depend on $r_x(0)$, but only on the correlation coefficients
$$\rho_x(n) = \frac{r_x(n)}{r_x(0)} \qquad n = 1, \ldots, N \qquad (1.540)$$
• Exploiting the fact that $\mathbf{R}$ is Toeplitz and Hermitian, the sets of equations (1.538) and (1.539) can be solved numerically by the Levinson–Durbin or the Delsarte–Genin algorithm, with a computational complexity proportional to $N^2$ (see Sections 2.2.1 and 2.2.2).
• We note that knowledge of $r_x(0), r_x(1), \ldots, r_x(N)$ univocally determines the ACS of an AR(N) process; for $n > N$, from (1.533), we get
$$r_x(n) = -\sum_{m=1}^{N} a_m\, r_x(n-m) \qquad (1.541)$$
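A small numerical sketch of the Yule–Walker solution, using the sign convention $x(k) = -\sum_n a_n x(k-n) + w(k)$; the AR(1) test values are illustrative, with the ACS taken from the known AR(1) closed form $r(n) = \frac{\sigma_w^2}{1-a_1^2}(-a_1)^n$.

```python
import numpy as np

def yule_walker(r, N):
    # Solve R a = -r for the AR coefficients, then sigma_w^2 = r(0) + sum_m a_m r(m)
    # (real-valued process, so R is a symmetric Toeplitz matrix built from r(|i-j|)).
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    a = -np.linalg.solve(R, r[1:N+1])
    sigma2 = r[0] + np.dot(r[1:N+1], a)
    return a, sigma2

# AR(1) check against the closed-form ACS
a1_true, s2_true = 0.8, 2.0
r = np.array([s2_true/(1 - a1_true**2) * (-a1_true)**n for n in range(3)])
a_hat, s2_hat = yule_walker(r, 1)
```

For larger `N`, `np.linalg.solve` can be replaced by a Levinson–Durbin recursion to exploit the Toeplitz structure, reducing the cost from $O(N^3)$ to $O(N^2)$.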
where $\hat{r}_x(n)$ is estimated for $|n| \le N$ with one of the two methods of Section 1.11.2, while for $|n| > N$ the recursive equation (1.541) is used. The AR model accurately estimates processes with a spectrum similar to that of Figure 1.48. For example, a spectral estimate for the process of Example 1.11.1 on page 87, obtained by an AR(12) model, is depicted in Figure 1.51. The correlation coefficients were obtained by a biased estimate on 10000 samples. Note that the continuous part of the spectrum is estimated only approximately; on the other hand, the choice of a larger order $N$ would result in an estimate with larger variance.
Figure 1.51. Comparison between the spectral estimate obtained by an AR(12) process
model and the analytical PSD given by (1.509).
Also note that the presence of spectral lines in the original process leads to zeros of the
polynomial A.z/ near the unit circle (see page 101). In practice, the correlation estimation
method and a choice of a large N may result in an ill-conditioned matrix R. In this case the
solution may have poles outside the unit circle, and hence the system would be unstable.
AR(1). From
$$\begin{cases} r_x(n) = -a_1\, r_x(n-1) & n > 0 \\ \sigma_w^2 = r_x(0) + a_1\, r_x(-1) \end{cases} \qquad (1.544)$$
we obtain
$$r_{\mathrm{AR}(1)}(n) = \frac{\sigma_w^2}{1 - |a_1|^2}\,(-a_1)^{|n|} \qquad (1.545)$$
from which the spectral density is
$$P_{\mathrm{AR}(1)}(f) = \frac{T_c\,\sigma_w^2}{\left|1 + a_1\, e^{-j2\pi f T_c}\right|^2} \qquad (1.546)$$
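A Monte Carlo check of (1.545), with an illustrative coefficient and unit noise variance:

```python
import numpy as np

# Simulate x(k) = -a1 x(k-1) + w(k) and compare the sample ACS with (1.545).
rng = np.random.default_rng(6)
a1, s2 = -0.7, 1.0
w = rng.standard_normal(200_000)
x = np.zeros_like(w)
for k in range(1, len(w)):
    x[k] = -a1*x[k-1] + w[k]

r_hat = np.array([np.mean(x[n:]*x[:len(x)-n]) for n in range(4)])
r_th  = np.array([s2/(1 - a1**2) * (-a1)**n for n in range(4)])
```

The sample ACS decays geometrically with ratio $-a_1$, as (1.545) predicts.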
The behavior of the spectral density of an AR(1) process is illustrated in Figure 1.52.
AR(2). Let $p_{1,2} = \varrho\, e^{\pm j\varphi_0}$, where $\varphi_0 = 2\pi f_0 T_c$, be the two complex roots of $A(z) = 1 + a_1 z^{-1} + a_2 z^{-2}$. We consider a real process:
$$\begin{cases} a_1 = -2\varrho\cos(2\pi f_0 T_c) \\ a_2 = \varrho^2 \end{cases} \qquad (1.547)$$
Letting
$$\vartheta = \tan^{-1}\left[\frac{1-\varrho^2}{1+\varrho^2}\,\tan(2\pi f_0 T_c)\right] \qquad (1.548)$$
we find
$$r_{\mathrm{AR}(2)}(n) = \sigma_w^2\, \frac{1+\varrho^2}{1-\varrho^2}\;\frac{\sqrt{1 + \left[\frac{1-\varrho^2}{1+\varrho^2}\tan(2\pi f_0 T_c)\right]^2}}{1 - 2\varrho^2\cos^2(4\pi f_0 T_c) + \varrho^4}\;\varrho^{|n|}\cos(2\pi f_0 |n| T_c - \vartheta) \qquad (1.549)$$
The spectral density is thus given by
$$P_{\mathrm{AR}(2)}(f) = \frac{T_c\,\sigma_w^2}{\left|1 - \varrho\, e^{-j2\pi(f - f_0)T_c}\right|^2 \left|1 - \varrho\, e^{-j2\pi(f + f_0)T_c}\right|^2} \qquad (1.550)$$
Solving the previous set of equations with respect to $r_x(0)$, $r_x(1)$ and $r_x(2)$, one obtains
$$\begin{cases} r_x(0) = \dfrac{1+a_2}{1-a_2}\;\dfrac{\sigma_w^2}{(1+a_2)^2 - a_1^2} \\[2mm] r_x(1) = -\dfrac{a_1}{1+a_2}\; r_x(0) \\[2mm] r_x(2) = \left(-a_2 + \dfrac{a_1^2}{1+a_2}\right) r_x(0) \end{cases} \qquad (1.552)$$
with $\varphi \in U[0, 2\pi)$. We observe that the process described by (1.554) satisfies the following difference equation for $k \ge 0$:
with $x(-2) = x(-1) = 0$. We note that the Kronecker impulses determine only the amplitude and phase of $x$.
In the z-domain, we get the homogeneous equation
It is important to verify that these zeros lie on the unit circle of the $z$-plane. Consequently, the representation of a sinusoidal process via the AR model is not possible, as the stability
condition $|p_i| < 1$ is not satisfied. Moreover, the input (1.555) is not white noise. In any case, we can try to find an approximation. In the hypothesis of uniform $\varphi$, from Example 1.9.5,
$$r_x(n) = \frac{A^2}{2}\cos(2\pi f_0 n T_c) \qquad (1.558)$$
$$r_{\mathrm{AR}(2)}(n) \simeq \left[\frac{\dfrac{\sigma_w^2}{1-\varrho^2}}{2 - 2\varrho^2\cos^2(4\pi f_0 T_c)}\right]\cos(2\pi f_0 n T_c) \qquad (1.559)$$
$$\lim_{\varrho\to 1,\; \sigma_w^2\to 0}\;\frac{\dfrac{\sigma_w^2}{1-\varrho^2}}{2 - 2\varrho^2\cos^2(4\pi f_0 T_c)} = \frac{A^2}{2} \qquad (1.560)$$
Observation 1.9
We can observe the following facts about the order of an AR model approximating a sinusoidal process.
• From (1.390) one finds that an AR process of order $N$ is required to model $N$ complex sinusoids; on the other hand, from (1.388), one sees that an AR process of order $2N$ is required to model $N$ real sinusoids.
• An ARMA process of order $(2N, 2N)$ is required to model $N$ real sinusoids plus white noise having variance $\sigma_b^2$. Observing (1.513), it results that $\sigma_w^2 \to \sigma_b^2$ and $B(z) \to A(z)$.
A better estimate is obtained by separating the continuous part from the spectral lines, for
example by the scheme illustrated in Figure 3.38. The two components are then estimated
by different methods.
1.A.1 Fundamentals
We consider the discrete-time linear transformation of Figure 1.54, with impulse response $h(t)$, $t \in \mathbb{R}$; the sampling period of the input signal is $T_c$, whereas that of the output signal is $T_c'$.
The input–output relation is given by the equation
$$y(kT_c') = \sum_{n=-\infty}^{+\infty} h(kT_c' - nT_c)\, x(nT_c) \qquad (1.561)$$
We will use the following simplified notation:
$$x_n = x(nT_c) \qquad (1.562)$$
$$y_k = y(kT_c') \qquad (1.563)$$
If we assume that $h$ has finite support, say between $t_1$ and $t_2$, that is, $h(kT_c' - nT_c) \neq 0$ for
[Figure 1.54: the sequence $x_n$, with sampling period $T_c$, is input to the filter $h$, whose output $y_k$ has sampling period $T_c'$.]
• the limits of the summation (1.568) are a complicated function of $T_c$, $T_c'$, $t_1$, and $t_2$.
and setting
$$\Delta_k = \frac{kT_c'}{T_c} - \left\lfloor \frac{kT_c'}{T_c} \right\rfloor \qquad (1.570)$$
$$I_1 = \left\lceil \frac{t_1}{T_c} - \Delta_k \right\rceil \qquad (1.571)$$
$$I_2 = \left\lfloor \frac{t_2}{T_c} - \Delta_k \right\rfloor \qquad (1.572)$$
(1.568) becomes
$$y_k = \sum_{i=I_1}^{I_2} h((i + \Delta_k)T_c)\; x_{\lfloor kT_c'/T_c\rfloor - i} \qquad (1.573)$$
From the definition (1.570) it is clear that $\Delta_k$ represents the truncation error of $kT_c'/T_c$ and that $0 \le \Delta_k < 1$. In the special case
$$\frac{T_c'}{T_c} = \frac{M}{L} \qquad (1.574)$$
we observe that $\Delta_k$ can assume only the $L$ values $\{0, 1/L, 2/L, \ldots, (L-1)/L\}$ for any value of $k$. Hence there are only $L$ univocally determined sets of values of $h$ that are used in the computation of $\{y_k\}$; in particular, if $L = 1$ only one set of coefficients exists, while if $M = 1$ there are $L$ sets. Summarizing, the output of a filter with impulse response $h$ and with different input and output time domains can be expressed as
$$y_k = \sum_{i=-\infty}^{+\infty} g_{k,i}\; x_{\lfloor kM/L\rfloor - i} \qquad (1.576)$$
where
We note that the system is linear and periodically time-varying. For $T_c' = T_c$, that is for $L = M = 1$, we get $\Delta_k = 0$, and the input–output relation is the usual convolution
$$y_k = \sum_{i=-\infty}^{+\infty} g_{0,i}\, x_{k-i} \qquad (1.578)$$
1.A.2 Decimation
Figure 1.55 represents a decimator or downsampler, with the output sequence related to the input sequence $\{x_n\}$ by
$$y_k = x_{kM} \qquad (1.579)$$
where $W_M = e^{-j\frac{2\pi}{M}}$ is defined in (1.92). Equivalently, in terms of the radian frequency normalized by the sampling frequency, $\omega' = 2\pi f / F_c'$, (1.580) can be written as
$$Y(e^{j\omega'}) = \frac{1}{M}\sum_{m=0}^{M-1} X\!\left(e^{j\frac{\omega' - 2\pi m}{M}}\right) \qquad (1.581)$$
[Figure 1.55: decimator, with input $x_n$ at rate $F_c = 1/T_c$ and output $y_k$ at rate $F_c' = F_c/M$, $T_c' = M T_c$.]
Figure 1.56. Decimation by a factor M = 3: (a) in the time domain, and (b) in the normalized radian frequency domain.
where

    \mathcal{X}(f) = X(e^{j2\pi f T_c})    (1.583)
    \mathcal{Y}(f) = Y(e^{j2\pi f M T_c})    (1.584)
The relation (1.582) for the signal of Figure 1.56 is represented in Figure 1.57. Note that the
only difference with respect to the previous representation is that all frequency responses
are now functions of the frequency f .
Hence we obtain

    X'(z) = \frac{1}{M} \sum_{m=0}^{M-1} \sum_{k=-\infty}^{+\infty} x_k W_M^{km} z^{-k}
          = \frac{1}{M} \sum_{m=0}^{M-1} \sum_{k=-\infty}^{+\infty} x_k \left(z W_M^{-m}\right)^{-k}    (1.591)
1.A.3 Interpolation
Figure 1.58 represents an interpolator or upsampler, with the input sequence \{x_n\} related
to the output sequence by

    y_k = \begin{cases} x_{k/L} & k = 0, \pm L, \pm 2L, \ldots \\ 0 & \text{otherwise} \end{cases}    (1.592)

In the z-transform domain,

    Y(z) = X(z^L)    (1.593)
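A quick numerical check of (1.593) (a sketch): after zero stuffing, the DFT of y consists of L periodic images of the DFT of x.

```python
import cmath

def dft(s):
    N = len(s)
    return [sum(s[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

L = 3
x = [1.0, -0.5, 0.25, 2.0]
y = []
for v in x:                      # zero stuffing: y_k = x_{k/L} at multiples of L
    y.extend([v] + [0.0] * (L - 1))
X = dft(x)                       # length 4
Y = dft(y)                       # length 12: Y[k] = X[k mod 4], i.e. Y(z) = X(z^L)
```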
    \mathcal{Y}(f) = \mathcal{X}(f)    (1.595)

where

    \mathcal{X}(f) = X(e^{j2\pi f T_c})    (1.596)
    \mathcal{Y}(f) = Y(e^{j2\pi f T_c/L})    (1.597)
The relation (1.595) for the signal of Figure 1.59 is illustrated in Figure 1.60. We note that the only
effect of the interpolation is that the signal \mathcal{X} must be regarded as periodic with period F_c'
rather than F_c.
[Figure 1.58: interpolator block: input x_n at rate F_c = 1/T_c, upsampler by L, output y_k at rate F_c' = L F_c, with T_c' = T_c/L.]

Figure 1.59. Interpolation by a factor L = 3: (a) in the time domain, (b) in the normalized
radian frequency domain.
[Figure 1.60: the relation (1.595) illustrated for the signal of Figure 1.59.]
and

    v_n = \sum_{i=-\infty}^{+\infty} h_i\, x_{n-i}    (1.600)

    Y(e^{j\omega'}) = \frac{1}{M} \sum_{m=0}^{M-1} H\!\left(e^{j\frac{\omega' - 2\pi m}{M}}\right) X\!\left(e^{j\frac{\omega' - 2\pi m}{M}}\right)    (1.604)
If

    H(e^{j\omega}) = \begin{cases} 1 & |\omega| < \dfrac{\pi}{M} \\ 0 & \text{otherwise} \end{cases}    (1.605)

we obtain

    Y(e^{j\omega'}) = \frac{1}{M}\, X\!\left(e^{j\frac{\omega'}{M}}\right) \qquad |\omega'| \le \pi    (1.606)
In this case h is a lowpass filter that avoids the aliasing caused by downsampling; if x is bandlimited,
the specifications of h can be made less stringent.
The decimator filter transformations are illustrated in Figure 1.62 for M = 4.
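A minimal decimator-filter sketch along these lines (the windowed-sinc design and all names are my own choices; its cutoff approximates the ideal response (1.605)):

```python
import math

def lowpass(num_taps, cutoff):
    """Hamming-windowed sinc lowpass; cutoff in cycles/sample, DC gain normalized to 1."""
    mid = (num_taps - 1) / 2
    h = []
    for n in range(num_taps):
        t = n - mid
        val = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        h.append(val * w)
    s = sum(h)
    return [v / s for v in h]

def decimator_filter(x, h, M):
    # v_n = sum_i h_i x_{n-i}  (1.600), then y_k = v_{kM}
    v = [sum(h[i] * x[n - i] for i in range(len(h)) if 0 <= n - i < len(x))
         for n in range(len(x))]
    return v[::M]

M = 4
h = lowpass(41, cutoff=1 / (2 * M))     # |H| ~ 1 for |w| < pi/M, ~ 0 beyond
x = [math.cos(2 * math.pi * 0.02 * n) for n in range(200)]   # in-band tone
y = decimator_filter(x, h, M)
# Away from the edges, y_k ~ cos(2*pi*0.02*(kM - 20)) (group delay of 20 samples)
```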
Figure 1.62. Frequency responses |X(f)|, |H(f)|, |V(f)|, |Y(f)| related to the transformations
in a decimator filter for M = 4.
    y_k = \sum_{j=-\infty}^{+\infty} h_{k-j}\, w_j    (1.607)

    w_k = \begin{cases} x_{k/L} & k = 0, \pm L, \ldots \\ 0 & \text{otherwise} \end{cases}    (1.608)

Therefore

    y_k = \sum_{r=-\infty}^{+\infty} h_{k-rL}\, x_r    (1.609)
1.A. Multirate systems 113
    W(z) = X(z^L)    (1.611)
    Y(z) = H(z)W(z) = H(z)X(z^L)    (1.612)

or, equivalently,

    Y(e^{j\omega'}) = H(e^{j\omega'})\, X(e^{j\omega' L})    (1.613)

where \omega' = 2\pi f T_c / L = \omega / L.
The interpolator filter transformations in the time and frequency domains are illustrated
in Figure 1.64 for L = 3.
If

    H(e^{j\omega'}) = \begin{cases} 1 & |\omega'| < \dfrac{\pi}{L} \\ 0 & \text{elsewhere} \end{cases}    (1.614)

we find

    Y(e^{j\omega'}) = \begin{cases} X(e^{j\omega' L}) & |\omega'| < \dfrac{\pi}{L} \\ 0 & \text{elsewhere} \end{cases}    (1.615)
The relation between the input and output signal power for an interpolator filter is expressed
by (1.419).
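A minimal interpolator-filter sketch (again a windowed-sinc lowpass of my own choosing, with cutoff \pi/L as in (1.614); here the filter output is scaled by L so that input amplitudes are preserved, a normalization not used in (1.615)):

```python
import math

def lowpass(num_taps, cutoff):
    """Hamming-windowed sinc lowpass; cutoff in cycles/sample, DC gain normalized to 1."""
    mid = (num_taps - 1) / 2
    h = []
    for n in range(num_taps):
        t = n - mid
        val = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        h.append(val * w)
    s = sum(h)
    return [v / s for v in h]

def interpolator_filter(x, h, L):
    w = []
    for v in x:                           # (1.608): zero stuffing
        w.extend([v] + [0.0] * (L - 1))
    # (1.607): y_k = sum_j h_{k-j} w_j, scaled by L to preserve amplitude
    return [L * sum(h[i] * w[k - i] for i in range(len(h)) if 0 <= k - i < len(w))
            for k in range(len(w))]

L = 3
h = lowpass(61, cutoff=1 / (2 * L))
x = [math.cos(2 * math.pi * 0.05 * n) for n in range(80)]    # tone at 0.05 cycles/sample
y = interpolator_filter(x, h, L)
# Away from the edges, y_k ~ cos(2*pi*(0.05/L)*(k - 30)) (group delay of 30 samples)
```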
Figure 1.64. Time and frequency responses related to the transformations in an interpolator
filter for L = 3.
This system can be thought of as the cascade of an interpolator and a decimator filter, as
illustrated in Figure 1.66, where h = h_1 * h_2. We obtain that

    H(e^{j\omega'}) = \begin{cases} 1 & |\omega'| \le \min\left(\dfrac{\pi}{L}, \dfrac{\pi}{M}\right) \\ 0 & \text{elsewhere} \end{cases}    (1.616)

In the time domain the following relation holds:

    y_k = \sum_{i=-\infty}^{+\infty} g_{k,i}\, x_{\lfloor kM/L \rfloor - i}    (1.617)

where g_{k,i} = h((iL + (kM) \bmod L)\, T_c') is the time-varying impulse response.
In the frequency domain we get

    Y(e^{j\omega''}) = \frac{1}{M} \sum_{l=0}^{M-1} V\!\left(e^{j\frac{\omega'' - 2\pi l}{M}}\right)    (1.618)

As

    V(e^{j\omega'}) = H(e^{j\omega'})\, X(e^{j\omega' L})    (1.619)

we obtain

    Y(e^{j\omega''}) = \frac{1}{M} \sum_{l=0}^{M-1} H\!\left(e^{j\frac{\omega'' - 2\pi l}{M}}\right) X\!\left(e^{j\frac{(\omega'' - 2\pi l)L}{M}}\right)    (1.620)

From (1.616) we have

    Y(e^{j\omega''}) = \begin{cases} \dfrac{1}{M}\, X\!\left(e^{j\frac{\omega'' L}{M}}\right) & |\omega''| \le \min\left(\pi, \dfrac{\pi M}{L}\right) \\ 0 & \text{elsewhere} \end{cases}    (1.621)

or

    \mathcal{Y}(f) = \frac{1}{M}\, \mathcal{X}(f) \qquad \text{for } |f| \le \min\left(\frac{1}{2T_c}, \frac{L}{2M T_c}\right)    (1.622)
[Figure: spectra X(e^{j\omega}), W(e^{j\omega'}), H(e^{j\omega'}), V(e^{j\omega'}), Y(e^{j\omega''}) for an interpolator-decimator filter with M = 5.]
Linear interpolation
Given two samples y_{k-1} and y_k, the signal z(t), limited to the interval [(k-1)T_c', kT_c'], is
obtained by the linear interpolation

    z(t) = y_{k-1} + (y_k - y_{k-1})\, \frac{t - (k-1)T_c'}{T_c'}    (1.623)
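Equation (1.623), evaluated at the P uniformly spaced instants of each interval, gives a simple upsampler (a sketch; the function name is mine):

```python
def linear_interp(y, P):
    """Evaluate (1.623) at t = (k-1 + n'/P) Tc' in each interval [(k-1)Tc', kTc'],
    returning P points per interval (the sequence y upsampled by a factor P)."""
    z = []
    for k in range(1, len(y)):
        for n in range(P):
            frac = n / P                         # (t - (k-1)Tc') / Tc'
            z.append(y[k - 1] + (y[k] - y[k - 1]) * frac)
    z.append(y[-1])                              # include the final sample
    return z

z = linear_interp([0.0, 2.0, 1.0], P=4)
# z = [0.0, 0.5, 1.0, 1.5, 2.0, 1.75, 1.5, 1.25, 1.0]
```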
[Figure: spectra X(e^{j\omega}), W(e^{j\omega'}), H(e^{j\omega'}), V(e^{j\omega'}), Y(e^{j\omega''}) for an interpolator-decimator filter with L = 5.]

[Figure: interpolation of the signal between the samples y_{k-2}, y_{k-1}, y_k, y_{k+1}.]
Quadratic interpolation
In many applications linear interpolation does not always yield satisfactory results. There-
fore, instead of connecting two points with a straight line, one resorts to a polynomial of
degree Q - 1 passing through Q points that are determined by the samples of the input
sequence. For this purpose Lagrange interpolation is widely used. As an example we
report here the case of quadratic interpolation, where a polynomial of degree 2 passes
through 3 points that are determined by the input samples. Let y_{k-1}, y_k,
and y_{k+1} be the samples to interpolate by a factor P in the interval [(k-1)T_c', (k+1)T_c'].
The quadratic interpolation yields the values

    z_n = \frac{n'}{2P}\left(\frac{n'}{P} - 1\right) y_{k-1} + \left(1 - \frac{n'}{P}\right)\left(1 + \frac{n'}{P}\right) y_k + \frac{n'}{2P}\left(\frac{n'}{P} + 1\right) y_{k+1}    (1.627)

with n' = 0, 1, \ldots, P-1 and n = (k-1)P + n'.
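A direct implementation of (1.627) as reconstructed above (a sketch; note that the three coefficients are the Lagrange basis polynomials for nodes at n' = -P, 0, P, so the parabola passes through y_{k-1}, y_k and y_{k+1}):

```python
def quadratic_interp_point(y_km1, y_k, y_kp1, n, P):
    """Evaluate (1.627) for a single n' = n in {0, ..., P-1}."""
    u = n / P
    # Lagrange basis weights for nodes u = -1, 0, +1
    return ((u / 2) * (u - 1) * y_km1
            + (1 - u) * (1 + u) * y_k
            + (u / 2) * (u + 1) * y_kp1)

# At n' = 0 the parabola passes through y_k; quadratic data are reproduced exactly.
```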
[Figure: noble identities — (a), (b): a downsampler by M followed by G(z) (output y_{1,k}) is equivalent to G(z^M) followed by a downsampler by M (output y_{2,k}); (c), (d): the dual identity for upsamplers.]
Defining

    E^{(0)}(z) = \sum_{m=0}^{\infty} h_{2m}\, z^{-m} \qquad E^{(1)}(z) = \sum_{m=0}^{\infty} h_{2m+1}\, z^{-m}    (1.631)
Letting

    e_m^{(\ell)} = h_{mM+\ell} \qquad 0 \le \ell \le M-1    (1.634)
where the components R^{(\ell)}(z) are permutations of the E^{(\ell)}(z), that is R^{(\ell)}(z) = E^{(M-1-\ell)}(z).
Efficient implementations
The polyphase representation is the key to obtaining efficient implementation of decimator
and interpolator filters. In the following, we will first consider the efficient implementations
for M = 2 and L = 2, then we will extend the results to the general case.
Figure 1.72. Implementation of a decimator filter using the type 1 polyphase representation
for M = 2.
Figure 1.73. Optimized implementation of a decimator filter using the type 1 polyphase
representation for M = 2.
In this implementation e^{(\ell)} requires N^{(\ell)} multiplications and N^{(\ell)} - 1 additions; the total
cost is still N multiplications and N - 1 additions, but, as e^{(\ell)} operates at half the input
rate, the computational complexity in terms of multiplications per second (MPS) is

    \text{MPS} = \frac{N F_c}{2}    (1.638)

while the number of additions per second (APS) is given by

    \text{APS} = \frac{(N-1) F_c}{2}    (1.639)
Therefore the complexity is about one half the complexity of the original filter. The efficient
implementation for the general case is obtained as an extension of the case for M = 2 and
is shown in Figure 1.74.
Figure 1.74. Implementation of a decimator filter using the type 1 polyphase representation.
Figure 1.75. Implementation of an interpolator filter using the type 1 polyphase representation
for L = 2.
Interpolator filter. With reference to Figure 1.63, we consider an interpolator filter with
L = 2. By (1.635), we can represent H(z) as illustrated in Figure 1.75; by the noble
identities, the filter representation can be drawn as in Figure 1.76a. The structure can also
be drawn as in Figure 1.76b, where output samples are alternately taken from the outputs
of the two filters e^{(0)} and e^{(1)}; this latter operation is generally called parallel-to-serial
(P/S) conversion. Remarks on the computational complexity are analogous to those of the
decimator filter case.
In the general case, efficient implementations are easily obtainable as extensions of the
case for L = 2 and are shown in Figure 1.77. The type 2 polyphase implementations of
interpolator filters are depicted in Figure 1.78.
Figure 1.76. Optimized implementation of an interpolator filter using the type 1 polyphase
representation for L = 2.
Figure 1.77. Implementation of an interpolator filter using the type 1 polyphase representation.
The sequence \{x(qT_Q')\} is then downsampled with timing phase t_0. Let y_k be the output
with sampling period T_c,

    y_k = x(kT_c + t_0)    (1.641)

with L and M positive integer numbers. Moreover, we assume that t_0 is a multiple of T_Q',
    \frac{t_0}{T_Q'} = \ell_0 + L_0 L    (1.643)
where \ell_0 \in \{0, 1, \ldots, L-1\}, and L_0 is a non-negative integer number. For the general
case of an interpolator-decimator filter where t0 and the ratio Tc =TQ0 are not constrained,
we refer to [18] (see also Chapter 14).
Figure 1.78. Implementation of an interpolator filter using the type 2 polyphase representation.
The interpolator filter structure from T_Q to T_Q' is illustrated in Figure 1.77. For the special
case M = 1, that is for T_c = T_Q, the implementation of the interpolator-decimator filter is
given in Figure 1.80, where

    y_k = v_{k+L_0}    (1.646)
In other words, \{y_k\} coincides with the signal \{v_n\} at the output of branch \ell_0 of the
polyphase structure. In practice we need to ignore the first L_0 samples of \{v_n\}, as the
relation between \{v_n\} and \{y_k\} must take into account a lead, z^{L_0}, of L_0 samples. With
reference to Figure 1.80, the output \{x_q\} at instants that are multiples of T_Q' is given
by the outputs of the various polyphase branches in sequence. In fact, letting q = \ell + nL,
\ell = 0, 1, \ldots, L-1, and n integer, we have
We now consider the general case M \neq 1. First, to downsample the signal interpolated
at TQ0 one can still use the polyphase structure of Figure 1.80. In any case, once t0 is
chosen, the branch is identified (say `0 ) and its output must be downsampled by a factor
M L. Notice that there is the timing lead L0 L in (1.643) to be considered. Given L0 , we
determine a positive integer N_0 such that L_0 + N_0 is a multiple of M, that is

    L_0 + N_0 = M_0 M    (1.648)
The structure of Figure 1.80, considering only branch `0 , is equivalent to that given in
Figure 1.81a, in which we have introduced a lag of N0 samples on the sequence frn g and
a further lead of N0 samples before the downsampler. In particular we have
Let \bar{w} = \bar{w}_I + j\bar{w}_Q be a complex Gaussian r.v. with zero mean and unit variance; note
that \bar{w}_I = \mathrm{Re}[\bar{w}] and \bar{w}_Q = \mathrm{Im}[\bar{w}]. In polar notation,

    \bar{w} = A\, e^{j\varphi}    (1.651)

It can be shown that \varphi is a uniform r.v. in [0, 2\pi), and A is a Rayleigh r.v. with probability
distribution

    P[A \le a] = \begin{cases} 1 - e^{-a^2} & a \ge 0 \\ 0 & a < 0 \end{cases}    (1.652)
Observing (1.652) and (1.651), if u_1 and u_2 are two uniform r.v.s in the interval [0, 1),
then

    A = \sqrt{-\ln(1 - u_1)}    (1.653)

and

    \varphi = 2\pi u_2    (1.654)

generate \bar{w} = A\, e^{j\varphi}, whose components \bar{w}_I = A\cos\varphi and \bar{w}_Q = A\sin\varphi are
two statistically independent Gaussian r.v.s, each with zero mean and variance equal
to 0.5.
The r.v. \bar{w} is also called a circularly symmetric Gaussian r.v., as the real and imaginary
components, being statistically independent with equal variance, have a circularly symmetric
Gaussian joint probability density function.
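The construction (1.651)–(1.654) is essentially the Box–Muller method; a quick sketch with an empirical check of the component statistics (the function name is mine):

```python
import math, random

def circular_gaussian(rng):
    """One circularly symmetric Gaussian r.v. with zero mean and unit variance,
    built from two uniforms via (1.653)-(1.654)."""
    A = math.sqrt(-math.log(1.0 - rng.random()))   # Rayleigh amplitude
    phi = 2.0 * math.pi * rng.random()             # uniform phase
    return complex(A * math.cos(phi), A * math.sin(phi))

rng = random.Random(12345)
samples = [circular_gaussian(rng) for _ in range(200000)]
mean_I = sum(s.real for s in samples) / len(samples)
var_I = sum(s.real ** 2 for s in samples) / len(samples)
var_Q = sum(s.imag ** 2 for s in samples) / len(samples)
# Each component has zero mean and variance ~ 0.5, so the total variance is ~ 1
```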
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 2
The Wiener filter and linear prediction
The theory of the Wiener filter [1, 2] that will be presented in this chapter is fundamental
to the comprehension of several important applications. The development of this theory
assumes the knowledge of the correlation functions of the relevant processes. An approxi-
mation of the Wiener filter can be obtained by least squares methods, through realizations
of the processes involved.
    y(k) = \sum_{n=0}^{N-1} c_n\, x(k - n)    (2.1)
If d(k) is the desired sample at the filter output at instant k, we define the estimation
error as

    e(k) = d(k) - y(k)    (2.2)
the coefficients of the filter are determined using the minimum mean-square error (MMSE)
criterion. Therefore the cost function is defined as
    J = E[|e(k)|^2]    (2.3)

and the coefficients of the optimum filter are those that minimize J:

    \min_{\{c_n\},\, n=0,1,\ldots,N-1} J    (2.4)
130 Chapter 2. The Wiener filter and linear prediction
The Wiener filter problem can be formulated as the problem of estimating d(k) by a
linear combination of x(k), \ldots, x(k-N+1). A brief introduction to estimation theory
is given in Appendix 2.A; in the second half of the Appendix, whose reading should
be deferred until the end of this section, the formulation of the Wiener theory is further
extended to the case of vector signals.
Matrix formulation
The problem introduced in the previous section is now formulated using matrix notation.
We define:1
1. Coefficient vector

    \mathbf{c}^T = [c_0, c_1, \ldots, c_{N-1}]    (2.5)
1 The components of an N -dimensional vector are usually identified by an index varying either from 1 to N or
from 0 to N 1.
2.1. The Wiener filter 131
Moreover,
We now express the cost function J as a function of the vector c. We will then seek the
particular vector c_{opt} that minimizes J. Recalling the definition
2. Correlation between the desired output at instant k and the filter input vector at the
same instant

    \mathbf{r}_{dx} = E[d(k)\, \mathbf{x}^*(k)] = \begin{bmatrix} E[d(k)x^*(k)] \\ E[d(k)x^*(k-1)] \\ \vdots \\ E[d(k)x^*(k-N+1)] \end{bmatrix} = \mathbf{p}    (2.13)
Moreover, it holds
Then
Then

    J = \sigma_d^2 - \mathbf{c}^H \mathbf{p} - \mathbf{p}^H \mathbf{c} + \mathbf{c}^H \mathbf{R} \mathbf{c}    (2.17)

and

    \mathbf{R} = \mathbf{R}_I + j\mathbf{R}_Q    (2.21)

If now we take the derivative of the terms of (2.17) using (2.19), we find

    \nabla_{\mathbf{c}_I}\, \mathbf{p}^H \mathbf{c} = \nabla_{\mathbf{c}_I}\, \mathbf{p}^H (\mathbf{c}_I + j\mathbf{c}_Q) = \mathbf{p}^H    (2.22)
    \nabla_{\mathbf{c}_Q}\, \mathbf{p}^H \mathbf{c} = \nabla_{\mathbf{c}_Q}\, \mathbf{p}^H (\mathbf{c}_I + j\mathbf{c}_Q) = j\mathbf{p}^H    (2.23)
    \nabla_{\mathbf{c}_I}\, \mathbf{c}^H \mathbf{p} = \nabla_{\mathbf{c}_I}\, (\mathbf{c}_I^T - j\mathbf{c}_Q^T)\mathbf{p} = \mathbf{p}    (2.24)
    \nabla_{\mathbf{c}_Q}\, \mathbf{c}^H \mathbf{p} = -j\mathbf{p}    (2.25)
    \nabla_{\mathbf{c}_I}\, (\mathbf{c}^H \mathbf{R} \mathbf{c}) = 2\mathbf{R}_I \mathbf{c}_I - 2\mathbf{R}_Q \mathbf{c}_Q    (2.26)
    \nabla_{\mathbf{c}_Q}\, (\mathbf{c}^H \mathbf{R} \mathbf{c}) = 2\mathbf{R}_I \mathbf{c}_Q + 2\mathbf{R}_Q \mathbf{c}_I    (2.27)
    \nabla_{\mathbf{c}}\, \mathbf{p}^H \mathbf{c} = \mathbf{0}    (2.28)
    \nabla_{\mathbf{c}}\, \mathbf{c}^H \mathbf{p} = 2\mathbf{p}    (2.29)
    \nabla_{\mathbf{c}}\, \mathbf{c}^H \mathbf{R} \mathbf{c} = 2\mathbf{R}\mathbf{c}    (2.30)
For the optimum coefficient vector c_{opt} the components of \nabla_c J are all equal to zero,
hence we get the Wiener–Hopf equation

    \mathbf{R}\, \mathbf{c}_{opt} = \mathbf{p}    (2.32)
Observation 2.1
The computation of the optimum coefficients copt requires the knowledge only of the input
correlation matrix R and of the cross-correlation vector p between the desired output and
the input vector.
In scalar form, the Wiener–Hopf equation is a system of N equations in N unknowns:

    \sum_{i=0}^{N-1} c_{opt,i}\, r_x(n - i) = r_{dx}(n) \qquad n = 0, 1, \ldots, N-1    (2.33)
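Numerically, (2.32) is just a linear system. A sketch in numpy (the scenario — white input filtered by a known h to produce d(k) — is my own choice for illustration; the estimated c_{opt} should then recover that h):

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.9, -0.4, 0.2])
x = rng.standard_normal(200000)
d = np.convolve(x, h_true)[: len(x)]       # d(k) = sum_i h_i x(k-i)

N = 5
# Estimate r_x(n) and r_dx(n) by time averages
r_x = np.array([np.dot(x[n:], x[: len(x) - n]) / (len(x) - n) for n in range(N)])
r_dx = np.array([np.dot(d[n:], x[: len(x) - n]) / (len(x) - n) for n in range(N)])
R = np.array([[r_x[abs(i - j)] for j in range(N)] for i in range(N)])  # Toeplitz
c_opt = np.linalg.solve(R, r_dx)           # Wiener-Hopf equation (2.32): R c = p
# For white x, c_opt ~ [0.9, -0.4, 0.2, 0, 0]
```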
Corollary 2.1
For c = c_{opt}, e(k) and y(k) are orthogonal, that is,

    E[y(k)\, e^*(k)] = 0
2 Note that orthogonality holds only if e and x are considered at the same instant. In other words, the notion of
orthogonality between random variables is used.
where \nu_i = \mathbf{u}_i^H(\mathbf{c} - \mathbf{c}_{opt}). The vector \boldsymbol{\nu} may be interpreted as a translation and a rotation
of the vector c. Then J assumes the form:

    J = J_{min} + \boldsymbol{\nu}^H \boldsymbol{\Lambda} \boldsymbol{\nu}
      = J_{min} + \sum_{i=1}^{N} \lambda_i\, |\nu_i|^2
      = J_{min} + \sum_{i=1}^{N} \lambda_i\, |\mathbf{u}_i^H(\mathbf{c} - \mathbf{c}_{opt})|^2    (2.47)
The result (2.47) expresses the excess mean-square error J - J_{min} as the sum of N compo-
nents in the direction of each eigenvector of R. Note that each component is proportional
to the corresponding eigenvalue.
The above observation allows us to deduce that J increases more rapidly in the direction
of the eigenvector corresponding to the maximum eigenvalue ½max . Likewise the increase is
slower in the direction of the eigenvector corresponding to the minimum eigenvalue ½min .
Let \mathbf{u}_{\lambda_{max}} and \mathbf{u}_{\lambda_{min}} denote the eigenvectors of R corresponding to \lambda_{max} and \lambda_{min},
respectively; it follows that \nabla_c J is largest along \mathbf{u}_{\lambda_{max}}. This is also observed in Figure 2.4,
where sets (loci) of points c for which a constant value of J is obtained are graphically
represented. In the 2-dimensional case they trace ellipses with axes that are parallel to the
direction of the eigenvectors and ratio of axes that is related to the value of the eigenvalues.
Example 2.1.1
Let d(k) = (h * x)(k), as shown in Figure 2.5. In this case, from Table 1.3,

    P_{dx}(z) = P_x(z)\, H(z)    (2.51)
[Figure 2.5: d(k) obtained by filtering x(k) with h; the Wiener filter c produces y(k), and e(k) = d(k) - y(k).]
Using (2.50):

    J_{min} = \sigma_d^2 - \int_{-\frac{1}{2T_c}}^{\frac{1}{2T_c}} P_{dx}^*(f)\, \frac{P_{dx}(f)}{P_x(f)}\, df
            = \sigma_d^2 - \int_{-\frac{1}{2T_c}}^{\frac{1}{2T_c}} \frac{|P_{dx}(f)|^2}{P_x(f)}\, df
            = \sigma_d^2 - T_c \int_{-\frac{1}{2T_c}}^{\frac{1}{2T_c}} \frac{|P_{dx}(e^{j2\pi f T_c})|^2}{P_x(e^{j2\pi f T_c})}\, df    (2.54)
Example 2.1.2
We want to filter the noise from a signal given by one complex sinusoid (tone) plus noise, i.e.

    x(k) = A\, e^{j(\omega_0 k + \varphi)} + w(k) \qquad d(k) = B\, e^{j(\omega_0 (k - D) + \varphi)}

where D is a known delay. We also assume that \varphi \in \mathcal{U}(0, 2\pi), and w is white noise with
zero mean and variance \sigma_w^2, uncorrelated with \varphi. The autocorrelation function of x and the
For a Wiener filter with N coefficients, the autocorrelation matrix R and the vector p have
the following structure:

    \mathbf{R} = \begin{bmatrix}
    A^2 + \sigma_w^2 & A^2 e^{-j\omega_0} & \cdots & A^2 e^{-j\omega_0(N-1)} \\
    A^2 e^{j\omega_0} & A^2 + \sigma_w^2 & \cdots & A^2 e^{-j\omega_0(N-2)} \\
    \vdots & \vdots & \ddots & \vdots \\
    A^2 e^{j\omega_0(N-1)} & A^2 e^{j\omega_0(N-2)} & \cdots & A^2 + \sigma_w^2
    \end{bmatrix}    (2.59)

    \mathbf{p} = A B\, e^{-j\omega_0 D} \begin{bmatrix} 1 \\ e^{j\omega_0} \\ \vdots \\ e^{j\omega_0(N-1)} \end{bmatrix}    (2.60)

Defining \mathbf{E}(\omega) = [1, e^{j\omega}, \ldots, e^{j\omega(N-1)}]^T, the solution of (2.32) is

    \mathbf{c}_{opt} = \frac{A B\, e^{-j\omega_0 D}}{\sigma_w^2 + N A^2}\, \mathbf{E}(\omega_0) = \frac{B}{A}\, \frac{\Lambda\, e^{-j\omega_0 D}}{1 + N\Lambda}\, \mathbf{E}(\omega_0)

where \Lambda = A^2/\sigma_w^2 is the signal-to-noise ratio. From (2.40) the minimum value of the cost
function J is given by

    J_{min} = B^2 - A B\, e^{j\omega_0 D}\, \frac{A B\, e^{-j\omega_0 D}}{\sigma_w^2 + N A^2}\, \mathbf{E}^H(\omega_0)\mathbf{E}(\omega_0) = \frac{B^2}{1 + N\Lambda}    (2.66)
    C_{opt}(e^{j\omega}) = \mathbf{E}^H(\omega)\, \mathbf{c}_{opt}    (2.67)
                        = \frac{B}{A}\, \frac{\Lambda\, e^{-j\omega_0 D}}{1 + N\Lambda} \sum_{i=0}^{N-1} e^{-j(\omega - \omega_0)i}

that is,

    C_{opt}(e^{j\omega}) = \begin{cases}
    \dfrac{B}{A}\, \dfrac{N\Lambda\, e^{-j\omega_0 D}}{1 + N\Lambda} & \omega = \omega_0 \\[1ex]
    \dfrac{B}{A}\, \dfrac{\Lambda\, e^{-j\omega_0 D}}{1 + N\Lambda}\, \dfrac{1 - e^{-j(\omega - \omega_0)N}}{1 - e^{-j(\omega - \omega_0)}} & \omega \neq \omega_0
    \end{cases}    (2.68)
We observe that, for \Lambda \gg 1,

1. J_{min} becomes negligible;

2. \mathbf{c}_{opt} \simeq \dfrac{B}{AN}\, e^{-j\omega_0 D}\, \mathbf{E}(\omega_0);

3. |C_{opt}(e^{j\omega_0})| \simeq \dfrac{B}{A}.
Figure 2.6. Magnitude of C_{opt}(e^{j2\pi f T_c}) given by (2.68) for f_0 T_c = 1/2, B = A, \Lambda = 30 dB, and
N = 35.
Conversely, for \Lambda \to 0, i.e. when the power of the useful signal is negligible with respect
to the power of the additive noise, we have

1. J_{min} = B^2;

2. \mathbf{c}_{opt} = \mathbf{0};

3. |C_{opt}(e^{j\omega_0})| = 0.

Indeed, as the signal-to-noise ratio vanishes, the best choice is to set the output y to zero.
The plot of |C_{opt}(e^{j2\pi f T_c})| is given in Figure 2.6.
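These closed-form results can be checked numerically (a numpy sketch, assuming the tone-plus-noise model implied by the structure of R and p above; all variable names are mine):

```python
import numpy as np

N, A, B, D = 35, 1.0, 1.0, 0
Lam = 1000.0                           # SNR = 30 dB
w0 = np.pi                             # f0 Tc = 1/2
sw2 = A**2 / Lam
E0 = np.exp(1j * w0 * np.arange(N))    # E(w0) = [1, e^{jw0}, ..., e^{jw0(N-1)}]^T
R = A**2 * np.outer(E0, E0.conj()) + sw2 * np.eye(N)
p = A * B * np.exp(-1j * w0 * D) * E0
c_opt = np.linalg.solve(R, p)
J_min = B**2 - np.vdot(p, c_opt).real  # J_min = sigma_d^2 - p^H c_opt
C_at_w0 = abs(np.vdot(E0, c_opt))      # |E^H(w0) c_opt|
# J_min ~ B^2/(1 + N*Lam); |C_opt(e^{jw0})| ~ B/A at high SNR
```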
The one-step forward predictor of order N, given \mathbf{x}^T(k-1), attempts to estimate the
value of x(k). There exists also the problem of predicting x(k-N), given the values of
x(k-N+1), \ldots, x(k). In this case, the system is called the one-step backward predictor
of order N.
[Figure 2.7: transversal filter with coefficients c_1, c_2, \ldots, c_{N-1}, c_N computing the prediction \hat{x}(k \mid \mathbf{x}(k-1)).]

The forward prediction error is then

    f_N(k) = x(k) - \hat{x}(k \mid \mathbf{x}(k-1)) = x(k) - \sum_{i=1}^{N} c_i\, x(k - i)    (2.71)
where \mathbf{a} = -\mathbf{c}_{opt}. Substituting (2.82) in (2.81), and taking care to extend the summation also
to i = 0, we obtain

    f_N(k) = \sum_{i=0}^{N} a'_{i,N}\, x(k - i)    (2.84)
With a similar procedure, we can derive the filter that gives the backward linear prediction
error,

    b_N(k) = x(k - N) - \sum_{i=1}^{N} g_i\, x(k - i + 1)    (2.86)
where B is the backward operator that orders the elements of a vector backward, from the
last to the first (see page 27).
• N = 1. From

    \begin{bmatrix} r_x(0) & r_x^*(1) \\ r_x(1) & r_x(0) \end{bmatrix}
    \begin{bmatrix} a'_{0,1} \\ a'_{1,1} \end{bmatrix}
    = \begin{bmatrix} J_1 \\ 0 \end{bmatrix}    (2.88)

it results

    a'_{0,1} = \frac{J_1}{\Delta_r}\, r_x(0) \qquad a'_{1,1} = -\frac{J_1}{\Delta_r}\, r_x(1)    (2.89)

where

    \Delta_r = \begin{vmatrix} r_x(0) & r_x^*(1) \\ r_x(1) & r_x(0) \end{vmatrix} = r_x^2(0) - |r_x(1)|^2    (2.90)
• N = 2.

    a'_{1,2} = -\frac{r_x(1) r_x(0) - r_x^*(1) r_x(2)}{r_x^2(0) - |r_x(1)|^2}
    \quad\Rightarrow\quad
    c_{opt,1} = \frac{\rho(1) - \rho^*(1)\rho(2)}{1 - |\rho(1)|^2}

    a'_{2,2} = -\frac{r_x(0) r_x(2) - r_x^2(1)}{r_x^2(0) - |r_x(1)|^2}
    \quad\Rightarrow\quad
    c_{opt,2} = \frac{\rho(2) - \rho^2(1)}{1 - |\rho(1)|^2}    (2.92)
and
We note that in the above equations ².n/ is the correlation coefficient of x, introduced
in (1.540).
2.2. Linear prediction 145
    J_0 = r_x(0)    (2.94)
    \Delta_0 = r_x(1)    (2.95)

    \Delta_n = (\mathbf{r}_{n+1}^B)^T\, \mathbf{a}'_n    (2.99)
    J_n = J_{n-1}\,(1 - |C_n|^2)    (2.100)
We now interpret the physical meaning of the parameters in the algorithm. Jn represents
the statistical power of the forward prediction error at the n-th iteration:
It results in
with
J0 D rx .0/ (2.103)
and
    J_N = J_0 \prod_{n=1}^{N} (1 - |C_n|^2)    (2.104)
In other words, \Delta_n can be interpreted as the cross-correlation between the forward linear
prediction error and the backward linear prediction error delayed by one sample. C_n satisfies
the following property:

    C_n = a'_{n,n}    (2.106)

Finally, by substitution, from (2.96), along with (2.101) and (2.105), we get

    C_n = \frac{E[f_{n-1}(k)\, b_{n-1}^*(k-1)]}{E[|f_{n-1}(k)|^2]}    (2.107)

and, noting that E[|f_{n-1}(k)|^2] = E[|b_{n-1}(k)|^2] = E[|b_{n-1}(k-1)|^2], from (2.107), by the
Schwarz inequality, we have

    |C_n| \le 1    (2.108)

The coefficients \{C_n\} are called reflection coefficients or partial correlation coefficients
(PARCOR).
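The recursion can be implemented in a few lines (a sketch for real-valued autocorrelations; the function name is mine, and sign conventions for the reflection coefficients differ across texts):

```python
def levinson_durbin(r, N):
    """Order-N forward linear prediction from real autocorrelation values r[0..N].
    Returns the prediction error filter a' (with a'[0] = 1), the error power J_N,
    and the reflection coefficients."""
    a = [1.0]
    J = r[0]
    refl = []
    for n in range(1, N + 1):
        # cross-correlation between forward and delayed backward errors
        delta = sum(a[i] * r[n - i] for i in range(n))
        C = -delta / J                    # reflection (PARCOR) coefficient, |C| <= 1
        a = a + [0.0]
        a = [a[i] + C * a[n - i] for i in range(n + 1)]
        J = J * (1 - C * C)               # error power update, cf. (2.100)
        refl.append(C)
    return a, J, refl

# AR(1) check: r_x(k) = 0.9**|k| -> optimal predictor is x(k) ~ 0.9 x(k-1)
r = [0.9 ** k for k in range(4)]
a, J, refl = levinson_durbin(r, 3)
# a ~ [1, -0.9, 0, 0], J ~ 1 - 0.81 = 0.19
```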
Lattice filters
We have just described the Levinson–Durbin algorithm. Its analysis permits us to implement
the prediction error filter via a modular structure. Defining
we can write:
    \mathbf{x}_{n+1}(k) = \begin{bmatrix} \mathbf{x}_n(k) \\ x(k-n) \end{bmatrix} = \begin{bmatrix} x(k) \\ \mathbf{x}_n(k-1) \end{bmatrix}    (2.110)
We recall the relation for forward and backward linear prediction error filters of order n:

    f_n(k) = a'_{0,n}\, x(k) + \cdots + a'_{n,n}\, x(k-n) = \mathbf{a}_n'^T\, \mathbf{x}_{n+1}(k)
    b_n(k) = a_{n,n}'^*\, x(k) + \cdots + a_{0,n}'^*\, x(k-n) = \mathbf{a}_n'^{BH}\, \mathbf{x}_{n+1}(k)    (2.111)
[Figure 2.10: lattice filter — a cascade of stages with coefficients C_1, \ldots, C_m, \ldots, C_N and their conjugates, with delays T_c producing b_0(k), b_1(k), \ldots, b_{m-1}(k), b_m(k), \ldots, b_{N-1}(k), b_N(k) along the lower branch.]
the block diagram of Figure 2.10 is obtained, in which the output is given by f_N(k).
We list the following fundamental properties.
Observation 2.2
From the above property 2 and (2.108), we find that all predictor error filters are minimum
phase.
1. Initialization. We set
^3 Faster algorithms, with a complexity proportional to N(\log N)^2, have been proposed by Kumar [4].
    \alpha_n = \frac{\beta_{n-1} - \gamma_{n-1}}{\beta_{n-2} - \gamma_{n-2}}    (2.117)

    \beta_n = 2\beta_{n-1} - \alpha_n \beta_{n-2}    (2.118)

    \mathbf{v}_n = \begin{bmatrix} \mathbf{v}_{n-1} \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ \mathbf{v}_{n-1} \end{bmatrix} - \alpha_n \begin{bmatrix} 0 \\ \mathbf{v}_{n-2} \\ 0 \end{bmatrix}    (2.119)

    \gamma_n = \mathbf{r}_{n+1}^T \mathbf{v}_n = (r_x(1) + r_x(n+1)) + [\mathbf{v}_n]_2\,(r_x(2) + r_x(n)) + \cdots    (2.120)

    \lambda_n = \frac{\beta_n}{\beta_{n-1}}    (2.121)

    \mathbf{a}'_n = \mathbf{v}_n - \lambda_n \begin{bmatrix} 0 \\ \mathbf{v}_{n-1} \end{bmatrix}    (2.122)

    J_n = \beta_n - \lambda_n \gamma_{n-1}    (2.123)

    C_n = 1 - \lambda_n    (2.124)

We note that (2.120) exploits the symmetry of the vector \mathbf{v}_n; in particular [\mathbf{v}_n]_1 =
[\mathbf{v}_n]_{n+1} = 1.
where y(k) is given by (2.1); according to the least squares method, the optimum filter
coefficients yield the minimum of the sum of the squared errors:

    \min_{\{c_n\},\, n=0,1,\ldots,N-1} \mathcal{E}    (2.127)
2.3. The least squares (LS) method 149
where

    \mathcal{E} = \sum_{k=N-1}^{K-1} |e(k)|^2    (2.128)
Note that in the LS method a time average is substituted for the expectation (2.3), which
gives the MSE.
Data windowing
In matrix notation, the output \{y(k)\}, k = N-1, \ldots, K-1, given by (2.1), can be
expressed as

    \begin{bmatrix} y(N-1) \\ y(N) \\ \vdots \\ y(K-1) \end{bmatrix}
    =
    \underbrace{\begin{bmatrix}
    x(N-1) & x(N-2) & \cdots & x(0) \\
    x(N) & x(N-1) & \cdots & x(1) \\
    \vdots & \vdots & & \vdots \\
    x(K-1) & x(K-2) & \cdots & x(K-N)
    \end{bmatrix}}_{\text{data matrix } \mathcal{T}}
    \begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_{N-1} \end{bmatrix}    (2.129)

In (2.129) we note that the input data sequence actually used goes from x(0) to x(K-1).
Other choices are possible for the input data window. The case examined is called the
covariance method, and the data matrix \mathcal{T}, defined by (2.129), is L \times N, where L = K - N + 1.
Matrix formulation
We define

    \Phi(i, n) = \sum_{k=N-1}^{K-1} x^*(k-i)\, x(k-n) \qquad i, n = 0, 1, \ldots, N-1    (2.130)

    \vartheta(n) = \sum_{k=N-1}^{K-1} d(k)\, x^*(k-n) \qquad n = 0, 1, \ldots, N-1    (2.131)
Using (1.478) for an unbiased estimate of the correlation, the following identities hold:
in which the values of \Phi(i, n) depend on both indices (i, n) and not only on their
difference, especially if K is not very large. We give some definitions:
1. Energy of \{d(k)\}

    \mathcal{E}_d = \sum_{k=N-1}^{K-1} |d(k)|^2    (2.134)
    \mathcal{E} = \mathcal{E}_d - \mathbf{c}^H \boldsymbol{\vartheta} - \boldsymbol{\vartheta}^H \mathbf{c} + \mathbf{c}^H \boldsymbol{\Phi} \mathbf{c}    (2.137)

Correlation matrix

\boldsymbol{\Phi} is the time average of \mathbf{x}^*(k)\mathbf{x}^T(k), i.e.

    \boldsymbol{\Phi} = \sum_{k=N-1}^{K-1} \mathbf{x}^*(k)\, \mathbf{x}^T(k)    (2.138)

Properties of \boldsymbol{\Phi}:

1. \boldsymbol{\Phi} is Hermitian.

2. \boldsymbol{\Phi} is positive semi-definite.

4. \boldsymbol{\Phi} can be written as

    \boldsymbol{\Phi} = \mathcal{T}^H \mathcal{T}    (2.139)

with \mathcal{T} the input data matrix defined by (2.129). We note that the matrix \mathcal{T} is Toeplitz.
    \nabla_{\mathbf{c}}\, \mathcal{E} = 2(\boldsymbol{\Phi}\mathbf{c} - \boldsymbol{\vartheta})    (2.140)

Then the vector of optimum coefficients based on the LS method, \mathbf{c}_{ls}, satisfies the normal
equation

    \boldsymbol{\Phi}\, \mathbf{c}_{ls} = \boldsymbol{\vartheta}    (2.141)
In the solution of the LS problem, the equation (2.141) corresponds to the Wiener–Hopf
equation (2.32). For an ergodic process, (2.132) yields:

    \frac{1}{K - N + 1}\, \boldsymbol{\Phi} \xrightarrow[K \to \infty]{} \mathbf{R}    (2.143)

and

    \frac{1}{K - N + 1}\, \boldsymbol{\vartheta} \xrightarrow[K \to \infty]{} \mathbf{p}    (2.144)
We find that the LS solution tends toward the Wiener solution for sufficiently large K,
that is

    \mathbf{c}_{ls} \xrightarrow[K \to \infty]{} \mathbf{c}_{opt}

In other words, for K \to \infty the covariance method gives the same solution as the
autocorrelation method.
In scalar notation, (2.141) becomes a system of N equations in N unknowns:

    \sum_{i=0}^{N-1} \Phi(n, i)\, c_{ls,i} = \vartheta(n) \qquad n = 0, 1, \ldots, N-1    (2.146)
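A numerical sketch of the covariance method in numpy (the test scenario is mine — a noiseless d(k) generated by a known filter, which c_{ls} then recovers exactly):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 3, 400
c_true = np.array([0.5, -1.0, 0.25])
x = rng.standard_normal(K)
d_full = np.convolve(x, c_true)[:K]           # d(k) generated by the true filter

# Data matrix T (L x N, L = K - N + 1), row k: [x(k), x(k-1), ..., x(k-N+1)]
T = np.array([[x[k - n] for n in range(N)] for k in range(N - 1, K)])
d = d_full[N - 1:]                            # d(N-1), ..., d(K-1)

Phi = T.conj().T @ T                          # (2.139)
theta = T.conj().T @ d                        # (2.157)
c_ls = np.linalg.solve(Phi, theta)            # normal equation (2.141)
```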
    \nabla_{c_n}\, \mathcal{E} = \sum_{k=N-1}^{K-1} \left[ -x^*(k-n)e(k) - x(k-n)e^*(k) + j\big(j x^*(k-n)e(k) - j x(k-n)e^*(k)\big) \right]
                               = -2 \sum_{k=N-1}^{K-1} x^*(k-n)\, e(k)    (2.147)
If we denote by \{e_{min}(k)\} the estimation error obtained with the optimum coefficient values
\mathbf{c}_{ls}, then the optimum coefficients must satisfy the conditions

    \sum_{k=N-1}^{K-1} e_{min}(k)\, x^*(k-n) = 0 \qquad n = 0, 1, \ldots, N-1    (2.148)
which represent the time-average version of the statistical orthogonality principle (2.36).
Moreover, y(k) being a linear combination of \{x(k-n)\}, n = 0, 1, \ldots, N-1, we have

    \sum_{k=N-1}^{K-1} e_{min}(k)\, y^*(k) = 0    (2.149)
Equation (2.149) expresses the fundamental result: the optimum filter output sequence is
orthogonal to the minimum estimation error sequence.
    \mathcal{E}_y = \sum_{k=N-1}^{K-1} |y(k)|^2 = \mathbf{c}^H \boldsymbol{\Phi} \mathbf{c}    (2.151)

then, because of the orthogonality (2.149) between y and e_{min}, it follows that

    \mathcal{E}_d = \mathcal{E}_y + \mathcal{E}_{min}    (2.153)

    \mathcal{E}_{min} = \mathcal{E}_d - \mathcal{E}_y    (2.154)

where \mathcal{E}_y = \mathbf{c}_{ls}^H \boldsymbol{\vartheta}.
that is

    \boldsymbol{\vartheta} = \mathcal{T}^H \mathbf{d}    (2.157)

where \mathbf{d} = [d(N-1), d(N), \ldots, d(K-1)]^T is the vector of desired samples. Thus, using
(2.139) and (2.157), the normal equation (2.141) becomes

    \mathcal{T}^H \mathcal{T}\, \mathbf{c}_{ls} = \mathcal{T}^H \mathbf{d}    (2.158)
Associated with system (2.158), it is useful to introduce the system of equations for the
minimization of E,
Tc D d (2.159)
We note how both formulae (2.160) and (2.161) depend only on the desired signal samples
and input samples. Moreover, the solution \mathbf{c} is unique only if the columns of \mathcal{T} are linearly
independent, that is, if \mathcal{T}^H \mathcal{T} is non-singular. This requires at least K - N + 1 > N,
that is, the system of equations (2.159) must be overdetermined, with more equations than
unknowns.
    \mathbf{y} = \mathcal{T}\mathbf{c}    (2.163)

This relation remains valid for \mathbf{c} = \mathbf{c}_{ls}, and from (2.160) we get

    \mathbf{e}_{min} = \mathbf{d} - \mathbf{y}    (2.165)

The matrix O = \mathcal{T}(\mathcal{T}^H \mathcal{T})^{-1} \mathcal{T}^H can be thought of as a projection operator defined on the
space generated by the columns of \mathcal{T}. Let \mathbf{I} be the identity matrix: the difference
O^{\perp} = \mathbf{I} - O projects onto the orthogonal complement of that space.
    \mathbf{y} = O\,\mathbf{d}    (2.167)
    \mathbf{e}_{min} = \mathbf{d} - \mathbf{y} = O^{\perp}\, \mathbf{d}    (2.168)
    \mathcal{E}_{min} = \mathbf{e}_{min}^H \mathbf{e}_{min} = \mathbf{d}^H \mathbf{e}_{min} = \mathbf{d}^H O^{\perp}\, \mathbf{d}    (2.169)
In Figure 2.11 an example illustrating the relation among d, y, and emin is given.
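This projection geometry can be verified directly (a numpy sketch with random data of my own):

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 12, 3
T = rng.standard_normal((L, N))
d = rng.standard_normal(L)

O = T @ np.linalg.inv(T.T @ T) @ T.T      # projection onto the column space of T
O_perp = np.eye(L) - O
y = O @ d                                  # (2.167)
e_min = O_perp @ d                         # (2.168)
# O is idempotent, and e_min is orthogonal to every column of T
```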
    \mathcal{T}\mathbf{c} = \mathbf{d}    (2.170)

with \mathcal{T} an N \times N square matrix. If \mathcal{T}^{-1} exists, the solution \mathbf{c} = \mathcal{T}^{-1}\mathbf{d} is unique and can be
obtained in various ways [5]:
1. If \mathcal{T} is triangular and non-singular, a solution to the system (2.170) can be found by
the successive substitutions method with O(N^2) operations.

2. In general, if \mathcal{T} is non-singular, one can use the Gauss method, which involves three
steps:
a. Factorization of \mathcal{T}

    \mathcal{T} = \mathbf{L}\mathbf{U}    (2.171)

with \mathbf{L} lower triangular having all ones along the diagonal and \mathbf{U} upper triangular;
b. Solution of the triangular system

    \mathbf{L}\mathbf{z} = \mathbf{d}    (2.172)

c. Solution of the triangular system

    \mathbf{U}\mathbf{c} = \mathbf{z}    (2.173)

3. If \mathcal{T} is Hermitian and positive definite, one can use the Cholesky decomposition

    \mathcal{T} = \mathbf{L}\mathbf{L}^H    (2.174)

with \mathbf{L} lower triangular having non-zero elements on the diagonal. This method
requires O(N^3) operations, about half as many as the Gauss method.
4. If \mathcal{T} is Toeplitz and non-singular, one can use the generalized Schur algorithm with a
complexity of O(N^2); more generally, it is applicable to all structured matrices \mathcal{T} [6]. We
also recall the Kumar fast algorithm [4].
However, if \mathcal{T}^{-1} does not exist, e.g. because \mathcal{T} is not a square matrix, it is necessary to
use alternative methods to solve the system (2.170) [5]; in particular we will consider the
method of the pseudo-inverse. First, we will state the following result.
    \mathcal{T} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^H    (2.175)

with

    \boldsymbol{\Sigma} = \begin{bmatrix} \mathbf{D} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}_{L \times N}    (2.176)

    \mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N]_{N \times N} \qquad \mathbf{V}\mathbf{V}^H = \mathbf{I}_{N \times N}    (2.179)
Definition 2.1
The pseudo-inverse of \mathcal{T}, L \times N, of rank R, is given by the matrix

    \mathcal{T}^{\#} = \mathbf{V} \boldsymbol{\Sigma}^{\#} \mathbf{U}^H = \sum_{i=1}^{R} \sigma_i^{-1}\, \mathbf{v}_i \mathbf{u}_i^H    (2.181)

where

    \boldsymbol{\Sigma}^{\#} = \begin{bmatrix} \mathbf{D}^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \qquad \mathbf{D}^{-1} = \mathrm{diag}\left(\sigma_1^{-1}, \sigma_2^{-1}, \ldots, \sigma_R^{-1}\right)    (2.182)
We find an expression of \mathcal{T}^{\#} for the two cases in which \mathcal{T} has full rank,^4 that is, R =
\min(L, N).
Case of an overdetermined system (L > N) and R = N. Note that in this case the system
(2.170) has more equations than unknowns. Using the above relations it can be shown that

    \mathcal{T}^{\#} = (\mathcal{T}^H \mathcal{T})^{-1} \mathcal{T}^H    (2.183)

In this case \mathcal{T}^{\#}\mathbf{d} coincides with the solution of system (2.141).
Case of an underdetermined system (L < N) and R = L. Note that in this case there are
fewer equations than unknowns, hence there are infinitely many solutions to the system (2.170).
Again, it can be shown that

    \mathcal{T}^{\#} = \mathcal{T}^H (\mathcal{T}\mathcal{T}^H)^{-1}    (2.184)

    \mathbf{c}_{ls} = \mathcal{T}^{\#}\, \mathbf{d}    (2.185)
By applying (2.185), the pseudo-inverse matrix \mathcal{T}^{\#} gives the LS solution of minimum norm;
in other words, it solves the problem of finding the vector \mathbf{c} that minimizes the squared
error (2.128), \mathcal{E} = \|\mathbf{e}\|^2 = \|\mathbf{y} - \mathbf{d}\|^2 = \|\mathcal{T}\mathbf{c} - \mathbf{d}\|^2, and simultaneously minimizes the norm
of the solution, \|\mathbf{c}\|^2. The constraint on \|\mathbf{c}\|^2 is needed in those cases in which there is
more than one vector that minimizes \|\mathcal{T}\mathbf{c} - \mathbf{d}\|^2.
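A sketch of (2.181) for an underdetermined system in numpy (np.linalg.pinv and the closed form (2.184) are used only as cross-checks):

```python
import numpy as np

rng = np.random.default_rng(3)
L, N = 3, 6                       # underdetermined: fewer equations than unknowns
T = rng.standard_normal((L, N))
d = rng.standard_normal(L)

U, s, Vh = np.linalg.svd(T, full_matrices=False)
T_pinv = (Vh.conj().T * (1.0 / s)) @ U.conj().T   # (2.181): sum of s_i^{-1} v_i u_i^H
c_min = T_pinv @ d                                 # minimum-norm LS solution (2.185)

# Consistency with the closed form T^H (T T^H)^{-1} d:
c_closed = T.conj().T @ np.linalg.inv(T @ T.conj().T) @ d
```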
We list the different cases:
1. If L = N and rank(\mathcal{T}) = N, i.e. \mathcal{T} is non-singular,

    \mathcal{T}^{\#} = \mathcal{T}^{-1}    (2.186)

2. If L > N and

a. rank(\mathcal{T}) = N, then

    \mathcal{T}^{\#} = (\mathcal{T}^H \mathcal{T})^{-1} \mathcal{T}^H    (2.187)

3. If L < N and

a. rank(\mathcal{T}) = L, then

    \mathcal{T}^{\#} = \mathcal{T}^H (\mathcal{T}\mathcal{T}^H)^{-1}    (2.189)
Only solutions (2.185) in the cases (2.186) and (2.187) coincide with the solution (2.142).
The computation of the pseudo-inverse \mathcal{T}^{\#} directly from the SVD, and the expansion of \mathbf{c} in
terms of \{\mathbf{u}_i\}, \{\mathbf{v}_i\} and \{\sigma_i^2\}, have two advantages with respect to the direct computation of
\mathcal{T}^{\#} in the form (2.187), for L > N and rank(\mathcal{T}) = N, or in the form (2.189), for L < N
and rank(\mathcal{T}) = L:
1. The SVD also gives the rank of T through the number of non-zero singular values.
2. The required accuracy in computing T# via SVD is almost halved with respect to the
computation of .T H T/1 or .TT H /1 .
There are two algorithms to determine the SVD of T: the Jacobi algorithm and the House-
holder transformation [7].
We conclude by citing two texts [8, 9], which report examples of realizations of the algo-
rithms described in this section.
Bibliography
[1] S. Haykin, Adaptive filter theory. Englewood Cliffs, NJ: Prentice-Hall, 3rd ed., 1996.
[2] M. L. Honig and D. G. Messerschmitt, Adaptive filters: structures, algorithms and
applications. Boston, MA: Kluwer Academic Publishers, 1984.
[3] P. Delsarte and Y. V. Genin, “The split Levinson algorithm”, IEEE Trans. on Acous-
tics, Speech and Signal Processing, vol. 34, pp. 470–478, June 1986.
[4] R. Kumar, “A fast algorithm for solving a Toeplitz system of equations”, IEEE Trans.
on Acoustics, Speech and Signal Processing, vol. 33, pp. 254–267, Feb. 1985.
[5] G. H. Golub and C. F. van Loan, Matrix computations. Baltimore and London: The
Johns Hopkins University Press, 2nd ed., 1989.
[6] N. Al-Dhahir and J. M. Cioffi, “Fast computation of channel-estimate based equalizers
in packet data transmission”, IEEE Trans. on Signal Processing, vol. 43, pp. 2462–
2473, Nov. 1995.
[7] W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical Recipes. New
York: Cambridge University Press, 3rd ed., 1988.
[8] L. S. Marple Jr., Digital spectral analysis with applications. Englewood Cliffs, NJ:
Prentice-Hall, 1987.
[9] S. M. Kay, Modern spectral estimation-theory and applications. Englewood Cliffs,
NJ: Prentice-Hall, 1988.
[10] S. M. Kay, Fundamentals of statistical signal processing: estimation theory. Engle-
wood Cliffs, NJ: Prentice-Hall, 1993.
2.A. The estimation problem 159
MMSE estimation
Let p_d(\alpha) and p_x(\beta) be the probability density functions of d and x, respectively, and
p_{d|x}(\alpha \mid \beta) the conditional probability density function of d given x = \beta; moreover let
p_x(\beta) \neq 0, \forall \beta. We wish to determine the function h that minimizes the mean-square error,
that is

    J = E[e^2] = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} [\alpha - h(\beta)]^2\, p_{dx}(\alpha, \beta)\, d\alpha\, d\beta
      = \int_{-\infty}^{+\infty} p_x(\beta) \int_{-\infty}^{+\infty} [\alpha - h(\beta)]^2\, p_{d|x}(\alpha \mid \beta)\, d\alpha\, d\beta    (2.193)
Theorem 2.2
The estimator h(\beta) that minimizes J is given by the expected value of d given x = \beta,

    h(\beta) = E[d \mid x = \beta]    (2.194)

The proof follows by observing that J is minimized if the inner integral in (2.193)
is minimized for every value of \beta. Using the variational method (see Appendix 8.A), we
find that this occurs if

    \int_{-\infty}^{+\infty} 2\,[\alpha - h(\beta)]\, p_{d|x}(\alpha \mid \beta)\, d\alpha = 0 \qquad \forall \beta    (2.196)
that is, for

    h(\beta) = \int_{-\infty}^{+\infty} \alpha\, p_{d|x}(\alpha \mid \beta)\, d\alpha = \int_{-\infty}^{+\infty} \alpha\, \frac{p_{dx}(\alpha, \beta)}{p_x(\beta)}\, d\alpha    (2.197)
where the notation arg max is defined in (6.21). If the distribution of d is uniform, the MAP
criterion becomes the maximum likelihood (ML) criterion, where
Examples of both MAP and ML criteria are given in Chapters 6 and 14.
Example 2.A.1
Let d and x be two jointly Gaussian r.v.s with mean values m_d and m_x, respectively, and covariance c = E[(d − m_d)(x − m_x)]. After several steps, it can be shown that [10]

    h(β) = m_d + (c/σ_x²)(β − m_x)        (2.200)
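The linear form of (2.200) can be checked numerically. The sketch below (plain Python; the model, its parameter values, and the test point β are illustrative choices, not from the text) generates jointly Gaussian pairs (d, x), bins the samples with x near a chosen β, and compares the empirical conditional mean with the formula.

```python
import random

random.seed(0)

# Illustrative jointly Gaussian model: x Gaussian, d a linear function of x
# plus independent Gaussian noise (parameters are examples only).
m_x, m_d = 1.0, 0.5
sigma_x = 2.0
samples = []
for _ in range(200000):
    x = random.gauss(m_x, sigma_x)
    d = m_d + 0.8 * (x - m_x) + random.gauss(0.0, 0.5)
    samples.append((d, x))

# Model covariance: c = E[(d - m_d)(x - m_x)] = 0.8 * sigma_x^2
c = 0.8 * sigma_x ** 2

beta = 2.0
# Empirical conditional mean E[d | x ~ beta]
near = [d for d, x in samples if abs(x - beta) < 0.05]
empirical = sum(near) / len(near)

# MMSE estimate from (2.200): h(beta) = m_d + (c / sigma_x^2)(beta - m_x)
h_beta = m_d + (c / sigma_x ** 2) * (beta - m_x)
print(empirical, h_beta)  # the two values should be close
```

For jointly Gaussian r.v.s the conditional mean is exactly linear in β, so the empirical average tracks h(β) up to Monte Carlo noise.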
Example 2.A.2
Let x = d + w, where d and w are two statistically independent r.v.s. For w ∈ 𝒩(0, 1) and d ∈ {−1, 1} with P[d = 1] = P[d = −1] = 1/2, it can be shown that the MMSE estimator is h(β) = tanh β, a nonlinear function of the observation. Given the N observations

    x_1 = β_1, …, x_N = β_N        (2.203)

the estimation of d is obtained by applying the following theorem, whose proof is similar to the case of a single observation.
Theorem 2.3
The estimator of d, d̂ = h(x_1, …, x_N), that minimizes J = E[(d − d̂)²] is given by

    h(β_1, …, β_N) = E[d | x_1 = β_1, …, x_N = β_N]
                   = ∫_{−∞}^{+∞} α p_{d|x_1…x_N}(α | x_1 = β_1, …, x_N = β_N) dα        (2.204)
                   = ∫_{−∞}^{+∞} α p_{d,x_1…x_N}(α, β_1, …, β_N) / p_{x_1…x_N}(β_1, …, β_N) dα
In the following, to simplify the formulation we will refer to r.v.s with zero mean.
Example 2.A.3
Let d, x = [x_1, …, x_N]^T, be real-valued jointly Gaussian r.v.s with zero mean and the following second-order description:

• Correlation matrix of the observations

    R = E[x x^T]        (2.205)

• Cross-correlation vector

    p = E[d x]        (2.206)
Theorem 2.4
Given the vector of observations x, the MMSE linear estimator of d has the following expression:

    d̂ = p^T R⁻¹ x        (2.210)

In other words,

    c_opt = R⁻¹ p

and the corresponding mean-square error is

    J_min = σ_d² − p^T R⁻¹ p        (2.211)
Note that the r.v.s are assumed to have zero mean.
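Theorem 2.4 can be checked on a small numerical example (plain Python; the 2 × 2 second-order description below is illustrative, not from the text): solve R c_opt = p and evaluate J_min as in (2.211).

```python
# Numerical illustration of Theorem 2.4 for N = 2 (values are examples only).
R = [[2.0, 0.5],
     [0.5, 1.0]]
p = [1.0, 0.4]
sigma_d2 = 1.0

# Inverse of the 2x2 correlation matrix, computed by hand
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
R_inv = [[ R[1][1] / det, -R[0][1] / det],
         [-R[1][0] / det,  R[0][0] / det]]

# c_opt = R^{-1} p   (2.210)
c_opt = [R_inv[0][0] * p[0] + R_inv[0][1] * p[1],
         R_inv[1][0] * p[0] + R_inv[1][1] * p[1]]

# J_min = sigma_d^2 - p^T R^{-1} p   (2.211)
J_min = sigma_d2 - (p[0] * c_opt[0] + p[1] * c_opt[1])
print(c_opt, J_min)
```

Note that J_min is always smaller than σ_d², since the quadratic form p^T R⁻¹ p is non-negative.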
Observation 2.3
Comparing (2.210) and (2.211) with (2.207) and (2.208), respectively, we note that, in the
case of jointly Gaussian r.v.s, linear estimation coincides with optimum MMSE estimation.
Definition 2.3
The linear minimum mean-square error (LMMSE) estimator, consisting of the N × M matrix C and of the M × 1 vector b, coincides with the linear function of the observations (2.219) that minimizes the cost function

    J = E[‖d − d̂‖²] = Σ_{m=1}^{M} E[|d_m − d̂_m|²]        (2.220)

In other words, the optimum coefficients C and b are the solution of the following problem:

    min_{C,b} J        (2.221)
being E[d] = E[x] = 0. Equation (2.224) implies that the choice b̃ = 0 yields the minimum value of J. Without loss of generality, we will assume that both x and d are zero-mean random vectors.
where c_1 is a column vector with N coefficients. In this case the problem (2.221) leads again to the Wiener filter; the solution is given by

    R_x c_1 = r_{d_1 x}        (2.226)

Vector case. For M > 1, d and d̂ are M-dimensional vectors. Nevertheless, since the function (2.220) operates on single components, the vector problem (2.221) leads to M scalar problems, each with input x and output d̂_1, d̂_2, …, d̂_M, respectively. Therefore the columns c_m of the matrix C satisfy equations of the type (2.226),

    R_x c_m = r_{d_m x}        m = 1, …, M        (2.227)

so that

    C = R_x⁻¹ R_{xd}        (2.228)

and

    d̂ = (R_x⁻¹ R_{xd})^T x        (2.229)
    e = d − d̂        (2.230)

    J = tr[R_e]        (2.232)
Chapter 3
Adaptive transversal filters

We reconsider the Wiener filter introduced in Section 2.1. Given two random processes x and d, we want to determine the coefficients of an FIR filter having input x, so that the filter output y is a replica, as accurate as possible, of the process d. Adopting, for example, the mean-square error criterion, it is required that the autocorrelation matrix R of the filter input vector and the cross-correlation p between the desired output and the input vector be known (see (2.15)). Estimating these correlations is usually difficult. Moreover, the optimum solution requires solving a system of equations with a computational complexity that is at least proportional to the square of the number of filter coefficients. In this chapter, we develop iterative algorithms with low computational complexity to obtain an approximation of the Wiener solution.
We will consider transversal FIR filters with N coefficients. In general the coefficients may vary with time. The filter structure at instant k is illustrated in Figure 3.1. We define:

1. Coefficient vector at instant k:

    c^T(k) = [c_0(k), c_1(k), …, c_{N−1}(k)]        (3.1)

Comparing y(k) with the desired response d(k), we obtain the estimation error

    e(k) = d(k) − y(k)        (3.4)
Depending on the cost function associated with {e(k)}, in Chapter 2 two classes of algorithms have been developed:

1. mean-square error (MSE),
2. least squares (LS).

In the following sections we will present iterative algorithms for each of the two classes.
Assuming that x and d are individually and jointly WSS, analogously to (2.17), J(k) can be written as

where R and p are defined in (2.16) and (2.15), respectively. The optimum Wiener–Hopf solution is c(k) = c_opt, where c_opt is given by (2.34). The corresponding minimum value of J(k) is J_min, given by (2.40).

where ∇_{c(k)} J(k) denotes the gradient of J(k) with respect to c (see (2.18)), μ is the adaptation gain, a real-valued positive constant, and k is the iteration index, in general not necessarily coinciding with the time instants.
As J(k) is a quadratic function of the vector of coefficients, from (2.31) we find

hence

In the scalar case (N = 1), for real-valued signals the above relations become:

and

    ∇_{c_0} J(k) = ∂J(k)/∂c_0 = 2 r_x(0) (c_0(k) − c_opt,0)        (3.11)

The behavior of J and the sign of ∇_{c_0} J(k) as a function of c_0 is illustrated in Figure 3.2. In the two-dimensional case (N = 2), the trajectory of ∇_c J(k) is illustrated in Figure 3.3. We recall that in general the gradient vector for c = c(k) is orthogonal to the locus of points with constant J that includes c(k).
Figure 3.2. Behavior of J and sign of the gradient ∇_{c_0} J in the scalar case (N = 1).
168 Chapter 3. Adaptive transversal filters
Figure 3.3. Loci of points with constant J and trajectory of ∇_c J(k) in the two-dimensional case (N = 2).
Starting at k = 0 with arbitrary c(0), that is, with Δc(0) = c(0) − c_opt, we now determine the conditions for the convergence of c(k) to c_opt or, equivalently, for the convergence of Δc(k + 1) to 0.

Using the decomposition (1.369), R = U Λ U^H, where U is the unitary matrix formed by the eigenvectors of R, and Λ is the diagonal matrix of eigenvalues {λ_i}, i = 1, …, N, and setting (see (2.46))

if and only if

or, equivalently,

As λ_i is positive (see (1.360)), we have that the algorithm converges if and only if

    0 < μ < 2/λ_i        i = 1, 2, …, N        (3.23)

If λ_max (λ_min) is the largest (smallest) eigenvalue of R, observing (3.23) the convergence condition can be expressed as

    0 < μ < 2/λ_max        (3.24)
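Condition (3.24) can be made concrete with a two-mode sketch (plain Python; the eigenvalues and target vector below are illustrative). In the eigenvector basis each component of the coefficient error evolves independently as Δc_i(k+1) = (1 − μλ_i) Δc_i(k), so the iteration converges for μ < 2/λ_max and the λ_max mode diverges beyond that bound.

```python
# Sketch: steepest descent with a 2x2 diagonal R (example eigenvalues),
# showing convergence for mu < 2/lambda_max and divergence otherwise.
lambdas = [1.0, 4.0]          # eigenvalues of R; lambda_max = 4
c_opt = [1.0, -2.0]           # illustrative optimum

def run(mu, steps=200):
    c = [0.0, 0.0]            # arbitrary initial condition c(0)
    for _ in range(steps):
        # Each mode evolves as Delta_c_i(k+1) = (1 - mu*lambda_i) Delta_c_i(k)
        c = [c_opt[i] + (1 - mu * lambdas[i]) * (c[i] - c_opt[i])
             for i in range(2)]
    return c

good = run(0.4)               # 0.4 < 2/4 = 0.5  -> converges to c_opt
bad = run(0.6)                # 0.6 > 0.5        -> lambda_max mode diverges
print(good, bad)
```

With μ = 0.4 both factors |1 − μλ_i| are below one; with μ = 0.6 the factor for λ_max is −1.4, whose powers grow without bound.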
Figure 3.4. Plot of ν_i(k) as a function of k for μλ_i < 1 and |1 − μλ_i| < 1. In the case μλ_i > 1 and |1 − μλ_i| < 1, ν_i(k) is still decreasing in magnitude, but it assumes alternating positive and negative values.
Correspondingly, observing (3.16) and (3.19) we obtain the expression of the vector of coefficients, given by

    c(k) = c_opt + Σ_{i=1}^{N} u_i ν_i(k)        (3.25)
         = c_opt + Σ_{i=1}^{N} u_i (1 − μλ_i)^k ν_i(0)        (3.26)

where u_{i,n} is the n-th component of u_i. In (3.26) the term u_{i,n} (1 − μλ_i)^k ν_i(0) characterizes the i-th mode of convergence. Note that, if the convergence condition (3.23) is satisfied, each coefficient c_n, n = 0, …, N − 1, converges to the optimum solution as a weighted sum of N exponentials, each with the time constant⁴

    τ_i = −1 / ln|1 − μλ_i| ≈ 1/(μλ_i)        i = 1, 2, …, N        (3.27)

where the approximation is valid if μλ_i ≪ 1.

⁴ The time constant is the number of iterations needed to reduce the amplitude of the signal associated with the i-th mode by a factor e.
3.1. Adaptive transversal filter: MSE criterion 171
Figure 3.5. Plot of ξ and |1 − μλ_i| as a function of μ for different values of λ_i: λ_min < λ_i < λ_max.
and

    J(k) = J_min + Σ_{i=1}^{N} λ_i |ν_i(k)|²        (3.33)

    J(k) = J_min + Σ_{i=1}^{N} λ_i (1 − μλ_i)^{2k} ν_i²(0)        (3.34)
Figure 3.6. ξ(μ_opt) as a function of the eigenvalue spread χ(R) = λ_max/λ_min.
assuming μλ_i ≪ 1. We note that (3.34) is different from (3.26) because of the presence of λ_i as weight of the i-th mode: consequently, modes associated with small eigenvalues tend to weigh less in the convergence of J(k). In particular, let us examine the two-dimensional case (N = 2). Recalling the observation that J(k) increases more rapidly (slowly) in the direction of the eigenvector corresponding to λ = λ_max (λ = λ_min) (see Figure 2.4), we have the following two cases.

Case 1: λ_1 ≪ λ_2. Choosing c(0) on the ν_2 axis (in correspondence of λ_max) we have the following situations:
    if μ < 1/λ_max    the iterative algorithm has a non-oscillatory behavior
    if μ = 1/λ_max    the iterative algorithm converges in one iteration        (3.36)
    if μ > 1/λ_max    the iterative algorithm has a trajectory that oscillates around the minimum
Let u_{λ_min} and u_{λ_max} be the eigenvectors corresponding to λ_min and λ_max, respectively. If no further information is given regarding the initial condition c(0), choosing μ = μ_opt the algorithm exhibits monotonic convergence along u_{λ_min}, and an oscillatory behavior around the minimum along u_{λ_max}.
3.1. Adaptive transversal filter: MSE criterion 173
    R̂(k) = x*(k) x^T(k)        (3.37)

and

    p̂(k) = d(k) x*(k)        (3.38)

    ∇_{c(k)} J(k) = −2 d(k) x*(k) + 2 x*(k) x^T(k) c(k)
                  = −2 x*(k) e(k)

The equation for the adaptation of the filter coefficients, where k now denotes a given time instant, becomes

    c(k + 1) = c(k) − ½ μ ∇_{c(k)} J(k)        (3.40)
that is,

    c(k + 1) = c(k) + μ e(k) x*(k)        (3.41)

Observation 3.1
The same equation is obtained for a cost function equal to |e(k)|², whose gradient is given by

    ∇_{c(k)} J(k) = −2 e(k) x*(k)        (3.42)
Implementation
The block diagram for the implementation of the LMS algorithm is shown in Figure 3.7,
with reference to the following parameters and equations.
5 We note that (3.39) represents an unbiased estimate of the gradient (3.8), but in general it also exhibits a large
variance.
Figure 3.7. Block diagram of an adaptive transversal filter adapted according to the LMS
algorithm.
Adaptation.

1. Estimation error

    c(0) = 0        (3.46)

The accumulators (ACC) in Figure 3.7 are used to store the coefficients, which are updated at each step by the current value of μ e(k) x*(k).
Computational complexity
For every iteration we have 2N + 1 complex multiplications (N due to filtering and N + 1 due to adaptation) and 2N complex additions. Therefore the LMS algorithm has a complexity of O(N).
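The O(N) recursion is compact enough to sketch directly. The plain-Python fragment below (not from the text; the input statistics, the "unknown" system h, and μ are illustrative) runs the LMS update c(k+1) = c(k) + μ e(k) x(k) for real-valued data in a system-identification setting.

```python
import random

random.seed(1)

# Sketch of the real-valued LMS recursion in a system-identification setup.
N = 4
h = [0.9, -0.5, 0.25, 0.1]    # unknown system to identify (illustrative)
mu = 0.02                     # adaptation gain (illustrative)
c = [0.0] * N                 # c(0) = 0
x_line = [0.0] * N            # delay line holding x(k), ..., x(k-N+1)

for k in range(5000):
    x_line = [random.gauss(0.0, 1.0)] + x_line[:-1]
    d = sum(h[i] * x_line[i] for i in range(N))        # desired response
    y = sum(c[i] * x_line[i] for i in range(N))        # filtering: N mult.
    e = d - y                                          # estimation error (3.4)
    c = [c[i] + mu * e * x_line[i] for i in range(N)]  # adaptation: N+1 mult.
print(c)
```

Per iteration the loop performs exactly the N multiplications of the filtering step and the N + 1 of the adaptation step counted above; after convergence c approximates h.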
Canonical model
The LMS algorithm operates with complex-valued signals and coefficients. We can rewrite
complex-valued quantities as follows.
Using the above definitions and considering separately real and imaginary terms, we derive
the new equations:
Figure 3.8. Realizations of input {x(k)}, coefficient {c(k)} and squared error {|e(k)|²} for a one-coefficient predictor (N = 1), adapted according to the LMS algorithm.
In other words, it is required that the mean of the iterative solution converges to the Wiener–Hopf solution and the mean of the estimation error approaches zero. To show the weakness of this criterion, in Figure 3.8 we illustrate the results of a simple experiment for an input x given by a real-valued AR(1) random process:

    x(k) = −a x(k − 1) + w(k)        w(k) ∼ 𝒩(0, σ_w²)        (3.60)
1. Predictor output
2. Prediction error
3. Coefficient update
In Figure 3.8, realizations of the processes {x(k)}, {c(k)} and {|e(k)|²} are illustrated, as well as the mean values E[c(k)] and J(k) = E[|e(k)|²], estimated by averaging over 500 realizations; c_opt and J_min represent the Wiener–Hopf solution. From the plots in Figure 3.8 we observe two facts:
1. The random processes x and c exhibit completely different behavior, so that they may be considered uncorrelated. It is interesting to observe that this hypothesis corresponds to assuming the filter input vectors {x(k)} statistically independent. Actually, for small values of μ, c follows the mean statistical parameters associated with the process x and not the process itself.
In other words, at convergence, both the mean of the coefficient error vector norm and the output mean-square error must be finite. The quantity J(∞) − J_min = J_ex(∞) is the excess MSE and represents the price paid for using a random adaptation algorithm for the coefficients rather than a deterministic one, such as the steepest-descent algorithm. In any case, we will see that the ratio J_ex(∞)/J_min can be made small by choosing a small adaptation gain μ. We note that the coefficients are obtained by averaging in time the quantity μ e(k) x*(k). Choosing a small μ, the adaptation will be slow and the effect of the gradient noise on the coefficients will be strongly attenuated.
This assumption is justified by the observation that the linear transformation that orthogonalizes both x(k) (see (1.368)) and Δc(k) (see (3.16)) in the gradient algorithm is given by U^H.

3. Fourth-order moments can be expressed as products of second-order moments (see (3.97)).
The adaptation equation of the LMS algorithm (3.41) can thus be written as

We note that Δc(k) depends only on the terms x(k − 1), x(k − 2), …
Moreover, with the change of variables (3.16), observing (3.33), the cost function can be written as

    J(k) = J_min + Σ_{i=1}^{N} λ_i E[|ν_i(k)|²]        (3.72)
As E[x*(k) x^T(k)] = R and the second term on the right-hand side of (3.73) vanishes for the orthogonality property (2.36) of the optimum filter, we obtain the same equation as in the case of the steepest-descent algorithm:

Consequently, for the LMS algorithm the convergence of the mean is obtained if

    0 < μ < 2/λ_max        (3.75)

Observing (3.25) and (3.32), and choosing the value μ = 2/(λ_max + λ_min), the vector E[Δc(k)] is reduced at each iteration at least by the factor (λ_max − λ_min)/(λ_max + λ_min). We can therefore assume that E[Δc(k)] becomes rapidly negligible with respect to the mean-square error during the process of convergence.
⁷ In other texts the Gaussian assumption is made, whereby E[x⁴(k)] = 3 r_x²(0). The conclusions of the analysis are similar.
[Figure: behavior of γ as a function of μ, for 0 ≤ μ ≤ 2/r_x(0).]
Likewise, from

    e(k) = d(k) − x(k) c(k) = e_min(k) − Δc(k) x(k)        (3.82)

it turns out that

    E[e²(k)] = E[e_min²(k)] + E[x²(k)] E[Δc²(k)]        (3.83)

that is,

    J(k) = J_min + r_x(0) E[Δc²(k)]        (3.84)

In particular, for k → ∞, we have

    J(∞) ≈ J_min + (μ/2) r_x(0) J_min        (3.85)

The relative MSE deviation, or misadjustment, is

    MSD = (J(∞) − J_min)/J_min = J_ex(∞)/J_min = (μ/2) r_x(0)        (3.86)
Let us define

    x̃(k) = [x̃_1(k), …, x̃_N(k)]^T = U^T x(k)        (3.88)

and
Observing (3.91), the second term on the right-hand side of (3.93) is equal to μ² J_min. Moreover, considering that ν(k) is statistically independent of x(k) and x̃(k), the first term can be written as

    E_{x,ν}[(I − μΛ(k)*) ν*(k) ν^T(k) (I − μΛ(k)^T)]
        = E_x[(I − μΛ(k)*) E_ν[ν*(k) ν^T(k)] (I − μΛ(k)^T)]        (3.94)
        = E[(I − μΛ(k)*)(I − μΛ(k)^T)] E[ν*(k) ν^T(k)]

Recalling Assumption 2 of the convergence analysis, we find that the matrix E[ν*(k) ν^T(k)] is diagonal, with elements on the main diagonal given by the vector

    η^T(k) = [η_1(k), …, η_N(k)] = [E[|ν_1(k)|²], …, E[|ν_N(k)|²]]        (3.95)

Observing (3.90), the elements with indices (i, i) of the matrix expressed by (3.94) are given by

    E[ η_i(k) + μ² Σ_{n=1}^{N} Λ*(i, n) Λ(n, i) η_n(k) − 2μ Λ(i, i) η_i(k) ]
        = η_i(k) + μ² Σ_{n=1}^{N} E[|x̃_i(k)|² |x̃_n(k)|²] η_n(k) − 2μ λ_i η_i(k)        (3.96)
        = η_i(k) + μ² Σ_{n=1}^{N} λ_i λ_n η_n(k) − 2μ λ_i η_i(k)
Let

    λ^T = [λ_1, …, λ_N]        (3.98)

    B = V diag(σ_1, …, σ_N) V^H        (3.101)

where {σ_i} denote the eigenvalues of B, and V is the unitary matrix formed by the eigenvectors {v_i} of B. After simple steps, similar to those applied to obtain (3.25) from (3.13), and using the relation

    N r_x(0) = tr[R] = Σ_{i=1}^{N} λ_i        (3.102)

we get

    η(k) = Σ_{i=1}^{N} K_i σ_i^k v_i + [μ J_min / (2 − μ N r_x(0))] 1        k ≥ 0        (3.103)

where 1 = [1, 1, …, 1]^T. In (3.103) the constants {K_i} are determined by the initial conditions

    K_i = v_i^H [η(0) − (μ J_min / (2 − μ N r_x(0))) 1]        i = 1, …, N        (3.104)

where the components of η(0) depend on the choice of c(0) according to (3.95) and (3.16):
Using (3.95) and (3.98), the cost function J(k) given by (3.72) becomes
Basic results
From the above convergence analysis, we will obtain some fundamental properties of the
LMS algorithm.
1. The transient behavior of J does not exhibit oscillations; this result is obtained by observing the properties of the eigenvalues of B.

2. The LMS algorithm converges if the adaptation gain μ satisfies the condition

    0 < μ < 2 / (statistical power of the input vector)        (3.109)

In fact the adaptive system is stable and J converges to a constant steady-state value under the conditions |σ_i| < 1, i = 1, …, N. This happens if

    0 < μ < 2 / (N r_x(0))        (3.110)

Conversely, if μ satisfies (3.110), from (3.99) the sum of the elements of the i-th row of B satisfies

    Σ_{n=1}^{N} [B]_{i,n} = 1 − μλ_i (2 − μ N r_x(0)) < 1        (3.111)
A matrix with these properties and whose elements are all positive has eigenvalues with absolute value less than one. In particular, being

    Σ_{i=1}^{N} λ_i = tr[R] = N r_x(0) = Σ_{i=0}^{N−1} E[|x(k − i)|²]        (3.112)

the condition for convergence in the mean-square implies convergence of the mean. In other words, convergence in the mean-square imposes a tighter bound on allowable values of the adaptation gain μ than that imposed by convergence of the mean (3.113). The new bound depends on the number of coefficients, rather than on the eigenvalue distribution of the matrix R. The relation (3.110) can be intuitively explained by noting that, for a given value of μ, an increase in the number of coefficients causes an increase in the excess mean-square error due to fluctuations of the coefficients around the mean value. Increasing the number of coefficients without reducing the value of μ leads to instability of the adaptive system.
3. Equation (3.107) reveals a simple relation between the adaptation gain μ and the value of J(k) in the steady state (k → ∞):

    J(∞) = [2 / (2 − μ N r_x(0))] J_min        (3.115)

from which the excess MSE is given by

    J_ex(∞) = J(∞) − J_min = [μ N r_x(0) / (2 − μ N r_x(0))] J_min        (3.116)
Observations
1. For μ → 0 all eigenvalues of B tend toward 1.

2. As shown below, a small eigenvalue of the matrix R (λ_i → 0) determines a large time constant for one of the convergence modes of J, as σ_i → 1. However, a large time constant of one of the modes implies a low probability that the corresponding term contributes significantly to the mean-square error.

Proof. If λ_i = 0, the i-th row of B becomes (0, …, 0, [B]_{i,i} = 1, 0, …, 0). Consequently σ_i = 1 and v_i^T = (0, …, 0, v_{i,i} = 1, 0, …, 0). As λ^T v_i = 0, from (3.108) we get C_i = 0.
    μ_opt(k) = [1 / (N r_x(0))] [J(k) − J_min] / J(k)        (3.120)

As the condition J(k) ≫ J_min is normally verified at the beginning of the iteration process, it results that

    μ_opt(k) ≈ 1 / (N r_x(0))        (3.121)

and

    J(k + 1) ≈ (1 − 1/N) J(k)        (3.122)

We note that the number of iterations required to reduce the value of J(k) by one order of magnitude is approximately 2.3 N.
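The 2.3N figure follows directly from (3.122): solving (1 − 1/N)^k = 0.1 gives k = ln(0.1)/ln(1 − 1/N) ≈ N ln 10 ≈ 2.3 N. A quick numerical check (the values of N are arbitrary examples):

```python
import math

# Iterations k needed for J(k) to drop by a factor 10 under
# J(k+1) = (1 - 1/N) J(k), compared with the 2.3*N rule of thumb.
ks = {}
for N in (10, 50, 200):
    ks[N] = math.log(0.1) / math.log(1.0 - 1.0 / N)
    print(N, ks[N], 2.3 * N)
```

The approximation tightens as N grows, since ln(1 − 1/N) → −1/N.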
5. Thus (3.103) indicates that at steady state all elements of η become equal. Consequently, recalling Assumption 2 of the convergence analysis, in steady state the filter coefficients are uncorrelated random variables with equal variance. The mean corresponds to the optimum vector c_opt.

6. In case the LMS algorithm is used to estimate the coefficients of a system that slowly changes in time, the adaptation gain μ has a lower bound larger than 0. In this case, the value of J "in steady state" varies with time and is given by the sum of three terms:

where J_min(k) corresponds to the Wiener–Hopf solution, J_ex(∞) depends instead on the LMS algorithm and is directly proportional to μ, and J_ℓ depends on the ability of the LMS algorithm to track the system variations and expresses the lag error in the estimate of the coefficients. It turns out that J_ℓ is inversely proportional to μ. Therefore, for time-varying systems μ must be chosen as a compromise between J_ex and J_ℓ and cannot be arbitrarily small [5, 6, 7].
Final remarks

1. The LMS algorithm is easy to implement.

    0 < μ < 2/(N r_x(0)) = 2 / (statistical power of the input vector)        (3.124)

4. J_ex(∞) is determined by the large eigenvalues of R, whereas the speed of convergence of E[c(k)] is imposed by λ_min. If the eigenvalue spread of R increases, the convergence of E[c(k)] becomes slower; on the other hand the convergence of J(k) is less sensitive to this parameter. Note, however, that the convergence behavior depends on the initial condition c(0) [8].
Leaky LMS
The leaky LMS algorithm is a variant of the LMS algorithm that uses the following adaptation equation:

with 0 < α ≪ r_x(0). This equation corresponds to the following cost function:

where, as usual,

In other words, the cost function includes an additional term proportional to the norm of the vector of coefficients. In steady state we get

It is interesting to give another interpretation of what has been stated. Observing (3.128), the application of the leaky LMS algorithm results in the addition of a small constant α to the terms on the main diagonal of the correlation matrix of the input process; the same result is obtained by adding white noise with statistical power α to the input process. Both approaches are useful to make invertible an ill-conditioned matrix R, or to accelerate the convergence of the LMS algorithm. It is usually sufficient to choose α two or three orders of magnitude smaller than r_x(0), in order not to modify substantially the original Wiener–Hopf solution. Therefore the leaky LMS algorithm is used in cases where the Wiener problem is ill-conditioned, and multiple solutions exist.
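Since the leaky adaptation equation itself is not reproduced above, the sketch below assumes the standard form c(k+1) = (1 − μα) c(k) + μ e(k) x(k) for real signals. The one-tap example (all parameter values illustrative) shows the diagonal-loading interpretation: the steady-state mean solution is biased toward zero by the factor r_x(0)/(r_x(0) + α).

```python
import random

random.seed(2)

# Sketch of the leaky LMS update (assumed standard form):
#   c(k+1) = (1 - mu*alpha) * c(k) + mu * e(k) * x(k)
# alpha is kept two orders of magnitude below r_x(0) = 1, as the text suggests.
mu, alpha = 0.05, 0.01
h = 0.8                        # one-tap "unknown system" (illustrative)
c = 0.0
for k in range(20000):
    x = random.gauss(0.0, 1.0)
    e = h * x - c * x          # estimation error
    c = (1.0 - mu * alpha) * c + mu * e * x

# Steady-state mean solution: c_inf = r_x(0)/(r_x(0) + alpha) * h,
# i.e. slightly biased toward zero relative to the Wiener solution h.
print(c, h / (1.0 + alpha))
```

With α three orders of magnitude below r_x(0) the bias would be correspondingly smaller, which is why the text recommends such values.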
Sign algorithm
There are adaptation equations that are simpler to implement, at the expense of a lower speed of convergence, for the same J(∞). Three versions are:

    c(k + 1) = c(k) + μ sgn(e(k)) x*(k)            (version 1)
    c(k + 1) = c(k) + μ e(k) sgn(x*(k))            (version 2)        (3.130)
    c(k + 1) = c(k) + μ sgn(e(k)) sgn(x*(k))       (version 3)

Note that the first version has as objective the minimization of the cost function
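For real scalar signals the three updates in (3.130) can be sketched side by side (plain Python; the one-tap system and the parameter values are illustrative):

```python
import random

random.seed(3)

def sgn(v):
    # sign function with sgn(0) = 0
    return (v > 0) - (v < 0)

# The three sign-LMS updates of (3.130), run in parallel on the same
# illustrative one-tap identification problem h = 0.5.
h, mu = 0.5, 0.01
c = [0.0, 0.0, 0.0]            # one coefficient per version
for k in range(30000):
    x = random.gauss(0.0, 1.0)
    d = h * x
    e = [d - ci * x for ci in c]
    c[0] += mu * sgn(e[0]) * x          # version 1: quantize the error
    c[1] += mu * e[1] * sgn(x)          # version 2: quantize the input
    c[2] += mu * sgn(e[2]) * sgn(x)     # version 3: quantize both
print(c)
```

Versions 1 and 3 replace multiplications by sign flips, which is the implementation advantage; the price, as the text notes, is slower convergence and a residual limit cycle of order μ around the solution.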
Sigmoidal algorithm
As extensions of the algorithms given in (3.130), the following adaptation equations may be considered:

    c(k + 1) = c(k) + μ φ(e(k)) x*(k)
    c(k + 1) = c(k) + μ e(k) φ(x*(k))        (3.132)
    c(k + 1) = c(k) + μ φ(e(k)) φ(x*(k))
[Figure: the sigmoidal function φ(a), plotted for β = 6, 12, 24, 48.]
Normalized LMS
In the LMS algorithm, if some x(k) assume large values, the adaptation algorithm is affected by strong noise in the gradient. This problem can be overcome by choosing an adaptation gain μ of the type

    μ = μ̃ / (p + M̂_x(k))        (3.135)

where 0 < μ̃ < 2, and

    M̂_x(k) = ‖x(k)‖² = Σ_{i=0}^{N−1} |x(k − i)|²        (3.136)

or, alternatively,

where M̂_x(k) is the estimate of the statistical power of x(k). A simple estimate is obtained by the iterative equation (see (1.468)):
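The resulting normalized LMS recursion can be sketched for real signals using the instantaneous energy estimate (3.136); the guard constant p, the system h, and μ̃ below are illustrative values.

```python
import random

random.seed(4)

# Sketch of normalized LMS: the gain (3.135) mu = mu_tilde / (p + ||x(k)||^2)
# rescales each update by the current input energy, so large input samples
# no longer inject proportionally large gradient noise.
N, mu_tilde, p = 4, 0.5, 0.01
h = [0.6, -0.3, 0.2, 0.1]      # unknown system (illustrative)
c = [0.0] * N
x_line = [0.0] * N
for k in range(5000):
    x_line = [random.gauss(0.0, 1.0)] + x_line[:-1]
    d = sum(h[i] * x_line[i] for i in range(N))
    e = d - sum(c[i] * x_line[i] for i in range(N))
    M_x = sum(xi * xi for xi in x_line)          # ||x(k)||^2, as in (3.136)
    mu = mu_tilde / (p + M_x)
    c = [c[i] + mu * e * x_line[i] for i in range(N)]
print(c)
```

The small constant p keeps the gain bounded when the input energy is momentarily near zero; 0 < μ̃ < 2 is the stability range stated in the text.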
2. Decreasing μ. For a time-invariant system, the adaptation gain usually selected for application with the sign algorithm (3.130) is given by

    μ(k) = μ_1 / (μ_2 + k)        k ≥ 0        (3.142)

with μ limited to the range [μ_min, μ_max]. Typical values are μ_1 ≈ 1 and μ_2 ≪ 1.
[Figure: behavior of J(k) when the adaptation gain is halved from μ_1 to μ_2 = μ_1/2 at iteration K_1; J(k) approaches J_min.]
with μ limited to the range [μ_min, μ_max]. Typical values are m_0, m_1 ∈ {1, 3} and α = 2.
and

    φ_0 = cos⁻¹(−a_1/(2ρ)) = 2.28 rad        (3.148)

Being r_x(0) = σ_x² = 1, from (1.552) we find that the statistical power of w is given by

    σ_w² = [(1 − a_2)/(1 + a_2)] [(1 + a_2)² − a_1²] = 0.0057  (−22.4 dB)        (3.149)

    c ≈ −a        (3.150)

that is c_1 ≈ −a_1, c_2 ≈ −a_2, and σ_e² ≈ σ_w². In any case, the predictor output is given by

    y(k) = c^T(k) x(k − 1)
         = c_1(k) x(k − 1) + c_2(k) x(k − 2)        (3.151)
For the predictor of Figure 3.12 we now consider various versions of the adaptive LMS
algorithm and their relative performance.
Convergence curves are plotted in Figure 3.13 for a single realization and for the mean (estimated over 500 realizations) of the coefficients and of the squared prediction error, for μ = 0.04. In Figure 3.14 a comparison is made between the convergence curves of the mean for three values of μ. We observe that, by decreasing μ, the excess error decreases, thus giving a more accurate solution, but the convergence time increases.
Convergence curves are plotted in Figure 3.15 for a single realization and for the mean
(estimated over 500 realizations) of the coefficients and of the squared prediction error,
Figure 3.13. Convergence curves for the predictor of order N = 2, obtained by the standard LMS algorithm.
Figure 3.14. Comparison among curves of convergence of the mean obtained by the standard LMS algorithm for three values of μ.
Figure 3.15. Convergence curves for the predictor of order N = 2, obtained by the leaky LMS.
for μ = 0.04 and α = 0.01. We note that the steady-state values are worse than in the previous case.
with

and

    p = (1/10) E[‖x‖²] = 0.2        (3.160)
Figure 3.16. Convergence curves for the predictor of order N = 2, obtained by the normalized LMS algorithm.
Convergence curves are plotted in Figure 3.16 for a single realization and for the mean (estimated over 500 realizations) of the coefficients and of the squared prediction error, for μ̃ = 0.08. We note that, with respect to the standard LMS algorithm, the convergence is considerably faster.
A direct comparison of the convergence curves obtained in the previous examples is
given in Figure 3.17.
A comparison of convergence curves is given in Figure 3.18 for the three versions of the sign LMS algorithm, for μ = 0.04. It turns out that version (2), where the estimation error in the adaptation equation is not quantized, yields the best performance in steady state. Version (3), however, yields the fastest convergence. To decrease the prediction error in steady state for versions (1) and (3), the value of μ could be further lowered, at the expense of reducing the speed of convergence.
Figure 3.17. Comparison of convergence curves for the predictor of order N = 2, obtained by three versions of the LMS algorithm.
Figure 3.18. Comparison of convergence curves obtained by three versions of the sign LMS
algorithm.
3.2. The recursive least squares (RLS) algorithm 197
Observation 3.2
As observed on page 97, for an AR process x, if the order of the predictor is greater than the required minimum, the correlation matrix is ill-conditioned, with a large eigenvalue spread. Thus the convergence of the LMS prediction algorithm can be extremely slow and can lead to a solution quite different from the Yule–Walker solution. In this case it is necessary to adopt a method that ensures the stability of the prediction error filter, such as the leaky LMS.
3. Filter output signal at instant i, obtained for the vector of coefficients c(k)

and the corresponding estimation error

    e(i) = d(i) − y(i)        (3.164)
the criterion for the optimization of the vector of coefficients c(k) is the minimum sum of squared errors up to instant k. Defining

    𝓔(k) = Σ_{i=1}^{k} λ^{k−i} |e(i)|²        (3.166)

we want to find

• λ is a forgetting factor that enables proper filtering operations even with non-stationary signals or slowly time-varying systems. The memory of the algorithm is approximately 1/(1 − λ).
Normal equation
Using the gradient method, the optimum value of c(k) satisfies the normal equation

where

    Φ(k) = Σ_{i=1}^{k} λ^{k−i} x*(i) x^T(i)        (3.169)

    ϑ(k) = Σ_{i=1}^{k} λ^{k−i} d(i) x*(i)        (3.170)
it follows that
and similarly
We now recall the following identity, known as the matrix inversion lemma [12]. Let
For
and
also called the Kalman gain vector. From (3.178) we have the recursive relation

We note that x^T(k) c(k − 1) is the filter output at instant k obtained by using the old coefficient estimate. In other words, from the a posteriori estimation error

    e(k) = d(k) − x^T(k) c(k)        (3.188)

we could say that ε(k) is an approximate value of e(k), computed before updating c. In any case the relation holds:

    c(k) = c(k − 1) + k*(k) ε(k)        (3.189)

In summary, the RLS algorithm consists of four equations:

    k*(k) = P(k − 1) x*(k) / [λ + x^T(k) P(k − 1) x*(k)]        (3.190)

In (3.190), k(k) is the input vector filtered by P(k − 1) and normalized by the factor λ + x^T(k) P(k − 1) x*(k). The term x^T(k) P(k − 1) x*(k) may be interpreted as the energy of the filtered input.
    Φ(k) = Σ_{i=1}^{k} λ^{k−i} x*(i) x^T(i) + δ λ^k I        with δ ≪ 1        (3.194)

so that

    Φ(0) = δ I        (3.195)

This is equivalent to having, for k ≤ 0, an all-zero input with the exception of x(−N + 1) = (λ^{−N+1} δ)^{1/2}. Consequently

Typically

    δ⁻¹ = 100 / r_x(0)        (3.197)

where r_x(0) is the statistical power of the input signal. In Table 3.1 we give a version of the RLS algorithm that exploits the fact that P(k) (the inverse of the Hermitian matrix Φ(k)) is Hermitian, hence
Initialization:
    c(0) = 0
    P(0) = δ⁻¹ I

For k = 1, 2, …:
    π*(k) = P(k − 1) x*(k)
    r(k) = 1 / [λ + x^T(k) π*(k)]
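The recursion sketched in Table 3.1 can be transcribed directly for real-valued signals, where conjugates and Hermitian transposes reduce to plain transposes. In the sketch below the unknown system, λ, and the data are illustrative, with δ⁻¹ = 100/r_x(0) chosen as suggested in (3.197).

```python
import random

random.seed(5)

# Sketch of the RLS recursion (real-valued signals, illustrative setup).
N, lam = 3, 0.99
delta_inv = 100.0              # delta^{-1} = 100 / r_x(0), with r_x(0) = 1
h = [0.7, -0.4, 0.2]           # unknown system (illustrative)
c = [0.0] * N                                        # c(0) = 0
P = [[delta_inv if i == j else 0.0 for j in range(N)] for i in range(N)]

x_line = [0.0] * N
for k in range(800):
    x_line = [random.gauss(0.0, 1.0)] + x_line[:-1]
    d = sum(h[i] * x_line[i] for i in range(N))
    pi = [sum(P[i][j] * x_line[j] for j in range(N)) for i in range(N)]  # P(k-1)x(k)
    r = 1.0 / (lam + sum(x_line[i] * pi[i] for i in range(N)))
    kgain = [r * pi[i] for i in range(N)]              # Kalman gain vector (3.190)
    eps = d - sum(c[i] * x_line[i] for i in range(N))  # a priori error
    c = [c[i] + kgain[i] * eps for i in range(N)]      # coefficient update (3.189)
    # P(k) = (P(k-1) - k(k) pi^T(k)) / lambda, using the symmetry of P
    P = [[(P[i][j] - kgain[i] * pi[j]) / lam for j in range(N)] for i in range(N)]
print(c)
```

Because every past sample (discounted by λ) enters each update, the coefficients lock onto the least-squares solution within a few tens of iterations, much faster than the LMS examples above, at O(N²) cost per iteration.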
    𝓔_d(k) = Σ_{i=1}^{k} λ^{k−i} |d(i)|² = λ 𝓔_d(k − 1) + |d(k)|²        (3.199)
Using the expression (3.179), and recalling that Φ(k) is Hermitian, from (3.184) we obtain
Figure 3.20. Convergence curves for the predictor of order N = 2, obtained by the RLS algorithm.
1. Algorithms for transversal filters. The fast Kalman algorithm has the same speed of convergence as the RLS, but a computational complexity comparable to that of the LMS algorithm. Exploiting some properties of the correlation matrix Φ(k), Falconer and Ljung [14] have shown that the recursive equation (3.193) requires only 10(2N + 1) multiplications. Cioffi and Kailath [15], with their fast transversal filter (FTF), have further reduced the number of multiplications to 7(2N + 1).
The implementation of these algorithms still remains relatively simple; their weak point resides in the sensitivity of the operations to round-off errors in the various coefficients and signals. As a consequence the fast algorithms may become numerically unstable.
2. Algorithms for lattice filters. There are versions of the RLS algorithm for lattice structures, called recursive least-squares lattice (LSL) algorithms in the literature, that have, in addition to a lower computational complexity than the standard RLS form, strong and weak points similar to those already discussed in the case of the LMS algorithm for lattice structures [12, 16].
3.4. Block adaptive algorithms in the frequency domain 205
convergence properties of the adaptive process. We will first consider some adaptive algorithms in the frequency domain that offer some advantages from the standpoint of computational complexity [24, 25, 26, 27].
3.4.1 Block LMS algorithm in the frequency domain: the basic scheme
The basic scheme includes a filter that performs the equivalent operation of a circular convolution in the frequency domain. As illustrated in Figure 3.21, the method operates on blocks of N samples. The instant at which a block is processed is k = nN, where n is an integer. Each input block is transformed using the DFT (see Section 1.4). The samples of the transformed sequence are denoted by {X_i(nN)}, i = 0, 1, …, N − 1. We indicate with {D_i(nN)} and {Y_i(nN)}, i = 0, 1, …, N − 1, respectively, the DFT of the desired output and of the adaptive filter output. Defining E_i(nN) = D_i(nN) − Y_i(nN), the LMS adaptation algorithm is expressed as:
In the following, lower case letters will be used to indicate sequences in the time domain,
while upper case letters will denote sequences in the frequency domain.
Computational complexity of the block LMS algorithm via FFT
We consider the computational complexity of the scheme of Figure 3.21 for N -sample real
input vectors. The algorithm requires three N -point FFTs and 2N complex multiplications
to update fCi g and compute fYi g. As for real data the complexity of an N -point FFT in
    N        CC_LMS,f / CC_LMS,t
    16             0.41
    64             0.15
    1024           0.015
A comparison between the computational complexity of the LMS algorithm via FFT and
the standard LMS algorithm is given in Table 3.3. We note that the advantage of the LMS
algorithm via FFT is non-negligible even for small values of N.
However, as the product of the DFTs of two time sequences is equivalent to a circular
convolution, the direct application of the scheme of Figure 3.21 is appropriate only if the
relation between y and x is a circular convolution rather than a linear convolution.
    c^T(nN) = [c_0(nN), c_1(nN), ..., c_{N−1}(nN)]    (3.216)

4. Error at instant nN + i.

The equation for updating the coefficients according to the block LMS algorithm is given by

    c((n + 1)N) = c(nN) + μ Σ_{i=0}^{N−1} e(nN + i) x*(nN + i)    (3.219)
As in the case of the standard LMS algorithm, the updating term is the estimate of the
gradient at instant nN, ∇(nN). The above equations can be efficiently implemented in the
frequency domain by the overlap-save technique (see (1.112)). Assuming L-point blocks,
where for example L = 2N, we define

    C′^T(nN) = DFT[c^T(nN), 0, ..., 0]    (N zeros appended)    (3.220)

    X′(nN) = diag{ DFT[x(nN − N), ..., x(nN − 1), x(nN), ..., x(nN + N − 1)] }    (3.221)

where the first N samples form block n − 1 and the last N samples form block n,
and
    Y′(nN) = X′(nN) C′(nN)    (3.222)
We give now the equations to update the coefficients in the frequency domain. Let us
consider the m-th component of the gradient,

    [∇(nN)]_m = Σ_{i=0}^{N−1} e(nN + i) x*(nN + i − m)    m = 0, 1, ..., N − 1    (3.224)

This component is given by the correlation between the error sequence {e(k)} and the input
{x(k)}, which is also equal to the convolution between e(k) and x*(−k). Let
then
    E′(nN) = F[d′(nN) − y′(nN)]    (3.230)

and

    C′((n + 1)N) = C′(nN) + μ F [ I_{N×N}  0_{N×N} ; 0_{N×N}  0_{N×N} ] F^{−1} [X′*(nN) E′(nN)]    (3.231)
A comparison between the computational complexity of the FLMS algorithm and the
standard LMS is given in Table 3.4.
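The overlap-save update (3.231) with L = 2N can be sketched as follows; the unknown system, step-size, and number of blocks are illustrative assumptions, and the gradient constraint is applied by zeroing the last N time-domain samples:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                          # block length; 2N points per FFT
mu = 0.02                      # adaptation gain (assumed value)
h = rng.standard_normal(N)     # hypothetical unknown FIR system

C = np.zeros(2 * N, dtype=complex)   # frequency-domain coefficients C'(nN)
x_old = np.zeros(N)
for n in range(2000):
    x_new = rng.standard_normal(N)
    xb = np.concatenate([x_old, x_new])      # blocks n-1 and n
    X = np.fft.fft(xb)                       # diagonal of X'(nN)
    y = np.real(np.fft.ifft(X * C))[N:]      # overlap-save: keep last N samples
    d = np.convolve(xb, h)[N:2 * N]          # desired output (linear convolution)
    e = d - y
    E = np.fft.fft(np.concatenate([np.zeros(N), e]))   # E'(nN)
    g = np.fft.ifft(np.conj(X) * E)
    g[N:] = 0                                # gradient constraint [I 0; 0 0]
    C = C + mu * np.fft.fft(g)
    x_old = x_new

c_hat = np.real(np.fft.ifft(C))[:N]          # equivalent time-domain filter
err = np.linalg.norm(c_hat - h) / np.linalg.norm(h)
```

Because the constraint keeps the last N time-domain coefficients at zero, the adapted filter performs an exact linear convolution, unlike the basic scheme.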
In general,

    G^{−1} = G^H    (3.240)
3. Coefficient vector at instant k.

5. Estimation error.

where

    μ_i = μ̃ / E[|z_i(k)|²]    (3.245)
We note that each component of the adaptation gain vector has been normalized using
the statistical power of the corresponding component of the transformed input vector. The
various powers can be estimated, e.g., by considering a small window of input samples or
recursively. Let
where
and
Then
    c_opt = (G* R_x G)^{−1} G* r_dx
          = G^{−1} R_x^{−1} (G*)^{−1} G* r_dx    (3.251)
          = G^H R_x^{−1} r_dx
          = G^H (R_x^{−1} r_dx)

where R_x^{−1} r_dx is the optimum Wiener solution without transformation.
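The normalized-gain update (3.245) can be sketched as follows, using a unitary DFT as the transform G and a recursive estimate of each power E[|z_i(k)|²]; the step-size μ̃, forgetting factor, and test system are assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
mu_tilde = 0.1                # assumed adaptation gain
beta = 0.99                   # forgetting factor for the power estimates
h = rng.standard_normal(N)    # hypothetical unknown system

G = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary transform: G^{-1} = G^H

c = np.zeros(N, dtype=complex)           # coefficients in the transformed domain
x_buf = np.zeros(N)
power = np.ones(N)                       # running estimates of E[|z_i(k)|^2]
for k in range(4000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    z = G @ x_buf                        # transformed input vector
    power = beta * power + (1 - beta) * np.abs(z) ** 2
    e = h @ x_buf - c @ z                # error w.r.t. the desired sample
    c = c + (mu_tilde / power) * e * np.conj(z)   # normalized gains, (3.245)

h_hat = np.real(G.T @ c)                 # equivalent time-domain filter
err = np.linalg.norm(h_hat - h) / np.linalg.norm(h)
```

With a white input all transformed components have equal power, so the normalization changes little; its benefit appears for correlated inputs, where it equalizes the convergence modes.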
214 Chapter 3. Adaptive transversal filters
Observation 3.3
• A filter bank can be more effective in separating the various subchannels in frequency,
even if more costly from the point of view of computational complexity.
3.5. LMS algorithm in a transformed domain 215
• There are versions of the algorithm where each output z_i(k) is decimated, with the
aim of reducing the number of operations.

• If {x(k)} and {d(k)} are real-valued signals, the filter coefficients satisfy the Hermitian
property:

    c_i(k) = c*_{N−1−i}(k)    i = 0, 1, ..., (N − 1)/2    (3.255)
Correspondingly, we have

    z_0(k) = (√2 / N) Σ_{m=0}^{N−1} x(k − m)    i = 0    (3.258)

    z_i(k) = (2 / N) Σ_{m=0}^{N−1} x(k − m) cos( π(2m + 1)i / (2N) )    i = 1, 2, ..., N − 1    (3.259)
Ignoring the gain factor cos((π/(2N)) i), which can be included in the coefficient c_i, also the
filtering operation determined by G_i(z) can be implemented recursively [12]. We note that,
if all the signals are real, the scheme can be implemented using real arithmetic.
• In general, they require a larger computational complexity than the standard LMS.
Figure 3.25. System model in which we want to identify the relation between x and z.
3.6. Examples of application 217
Linear case
Assuming the system between z(k) and x(k) can be modelled as an FIR filter, the experiment
illustrated in Figure 3.26 can be adopted to estimate the filter impulse response.
Using an input x, known to both systems, we determine the output of the transversal
filter c with N coefficients
    y(k) = Σ_{i=0}^{N−1} c_i(k) x(k − i) = c^T(k) x(k)    (3.260)
Figure 3.26. Adaptive scheme to estimate the impulse response of the unknown system.
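A minimal simulation of the scheme of Figure 3.26 follows; the unknown system h, the step-size μ, and the noise level are hypothetical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5
mu = 0.1                        # assumed adaptation gain
sigma_w = 0.1                   # assumed noise standard deviation
h = np.array([1.0, 0.5, -0.3, 0.2, -0.1])   # hypothetical unknown system

c = np.zeros(N)
x_buf = np.zeros(N)
for k in range(3000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.choice([-1.0, 1.0])       # white input with M_x = 1
    d = h @ x_buf + sigma_w * rng.standard_normal()   # noisy system output
    y = c @ x_buf                            # y(k) = c^T(k) x(k), (3.260)
    c = c + mu * (d - y) * x_buf             # LMS update

err = np.linalg.norm(c - h) / np.linalg.norm(h)
```

The residual error after convergence is set by the noise power and μ, in line with the discussion below: a larger noise power calls for a smaller μ.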
and
and
From (3.267) we see that the noise w does not affect the solution c_opt; consequently, the
expectation of (3.262) for k → ∞ (equal to c_opt) is also not affected by w. However, as seen
in Section 3.1.3, the noise influences the convergence process and the solution obtained by
the adaptive LMS algorithm. The larger the power of the noise, the smaller μ must be so
that c(k) approaches E[c(k)]. In any case J(∞) ≠ 0.
On the other hand, if N < Nh then copt in (3.267) coincides with the first N coefficients
of h, and
As the input x is white, the convergence behavior of the LMS algorithm (3.262) is easily
determined. Let a be defined as in (3.79); then

    E[||Δc(k)||²] = a^k E[||Δc(0)||²] + μ² N r_x(0) J_min (1 − a^k)/(1 − a)    k ≥ 0    (3.272)
Example 3.6.1
Consider an unknown system whose impulse response, given in Table 1.4 on page 26 as h_1,
has energy equal to 1.06. The noise is additive, white, and Gaussian with statistical power
σ_w² = 0.01. Identification via the standard LMS and RLS adaptive algorithms is obtained using
as input a maximal-length PN sequence of length L = 31 and unit power, M_x = 1. For a
filter with N = 5 coefficients, the convergence curves of the mean-square error (estimated
over 500 realizations) are shown in Figure 3.27. For the LMS algorithm, μ = 0.1 is chosen,
which leads to a misadjustment equal to MSD = 0.26.
As discussed in Appendix 3.B, as index of the estimate quality we adopt the ratio:
    Λ_n = σ_w² / E[||Δh||²]    (3.275)
Figure 3.27. Convergence curves of the mean-square error for system identification using
LMS and RLS.
We note that, even if the input signal is white, the RLS algorithm usually yields a better
estimate than the LMS. However, for systems with a large noise power and/or slowly time-
varying impulse responses, the two methods tend to give the same performance in terms of
speed of convergence and error in the steady state. As a result, it is usually preferable to adopt
the LMS algorithm, as it leads to an easier implementation.
where x(i) ∈ A, a finite alphabet with M elements. Then z(k) assumes values in an alphabet
with at most M³ values, which can be identified by a table or random-access memory (RAM)
method, as illustrated in Figure 3.28. The cost function to be minimized is expressed as
    ĝ(x(k)) = ĝ(x(k)) + μ e(k)    (3.280)
In other words, the input vector x.k/ identifies a particular RAM location whose content
is updated by adding a term proportional to the error. In the absence of noise, if the RAM
is initialized to zero, the content of a memory location can be immediately identified by
looking at the output. In practice, however, it is necessary to access each memory location
several times to average out the noise. We note that, if the sequence {x(k)} is i.i.d., x(k)
selects on average each RAM location the same number of times.
An alternative method consists of setting y(k) = 0 during the entire time interval devoted
to system identification, and updating the RAM with the values of {d(k)}, according to the
equation

    ĝ(x(k)) = ĝ(x(k)) + d(k)    k = 0, 1, ...    (3.281)
To complete the identification process, the value at each RAM location is scaled by the
number of updates that have taken place for that location. This is equivalent to considering
    ĝ(x) = E[g(x) + w]    (3.282)
We note that this method is a block version of the LMS algorithm with block length equal
to the length of the input sequence, where the RAM is initialized to zero, so that e(k) = d(k),
and μ is given by the relative frequency of each address.
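The table (RAM) method can be sketched as follows; the nonlinear system g, its memory of three samples, and the adaptation gain are illustrative assumptions, and the noise is omitted for simplicity:

```python
import numpy as np

rng = np.random.default_rng(4)

def g(x3):
    """Hypothetical unknown nonlinear system with memory of 3 samples."""
    return x3[0] + 0.5 * x3[1] * x3[2] - 0.2 * x3[0] * x3[1]

mu = 0.2                         # assumed adaptation gain
ram = {}                         # one cell per input pattern (at most M^3 = 8)
x_buf = (1.0, 1.0, 1.0)
for k in range(5000):
    x_buf = (rng.choice([-1.0, 1.0]),) + x_buf[:2]   # input vector x(k)
    e = g(x_buf) - ram.get(x_buf, 0.0)               # error for this address
    ram[x_buf] = ram.get(x_buf, 0.0) + mu * e        # update (3.280)

worst = max(abs(g(key) - val) for key, val in ram.items())
```

With an i.i.d. binary input, each of the 8 addresses is visited roughly the same number of times, so every cell converges to the corresponding system output.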
Observation 3.4
In this section and in Appendix 3.B, the observation d and the input x are defined on
the same time domain with sampling period T_c. Often, however, the input is defined on
the domain with sampling period T_c, while the system output signal is defined on T_c/F_0.
Using the polyphase representation (see Section 1.A.9) of d, it is convenient to represent
the estimate of h defined on T_c/F_0 as F_0 estimates defined on T_c.
General solution
With reference to Figure 3.30, for a general input x to the adaptive filter, the Wiener–Hopf
solution in the z-transform domain is given by (see (2.50))
    C_opt(z) = P_dx(z) / P_x(z)    (3.289)
Adopting for d and x the model of Figure 3.31, in which w′ and w″ are additive noise
signals uncorrelated with w and s, and using Table 1.3, (3.289) becomes

    C_opt(z) = P_w(z) H*(1/z*) / ( P_{w′}(z) + P_w(z) H(z) H*(1/z*) )    (3.290)

    C_opt(z) = 1 / H(z)    (3.291)
where s is the desired signal, and the sinusoidal term is the interferer. As shown in
Figure 3.32, we take as reference signals
and
At convergence, the two coefficients c1 and c2 change the amplitude and phase of the refer-
ence signal to cancel the interfering tone. The relation between d and output e corresponds
to a notch filter as illustrated in Figure 3.33.
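The two-coefficient canceller can be sketched as follows; the tone frequency, amplitudes, and step-size are assumed values. At convergence the error e retains the wideband signal while the tone is removed:

```python
import numpy as np

rng = np.random.default_rng(7)
f0 = 0.05                  # normalized interferer frequency (assumed)
mu = 0.01                  # assumed adaptation gain
n_iter = 20000
k = np.arange(n_iter)

s = 0.5 * rng.standard_normal(n_iter)            # desired wideband signal
d = s + 2.0 * np.cos(2 * np.pi * f0 * k + 0.7)   # primary input with tone
x1 = np.cos(2 * np.pi * f0 * k)                  # reference signal
x2 = np.sin(2 * np.pi * f0 * k)                  # 90-degree shifted reference
c = np.zeros(2)
e = np.zeros(n_iter)
for i in range(n_iter):
    y = c[0] * x1[i] + c[1] * x2[i]              # scaled/phased reference
    e[i] = d[i] - y
    c += mu * e[i] * np.array([x1[i], x2[i]])    # LMS update of c1, c2

# after convergence, the residual tone power in e is small
residual_tone = np.mean((e[n_iter // 2:] - s[n_iter // 2:]) ** 2)
```

The two coefficients jointly set the amplitude and phase of the reference, which is why a quadrature pair (x1, x2) suffices to cancel a single tone.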
It is easy to see that x2 is obtained from x1 via a Hilbert filter (see Figure 1.28). We
note that in this case x2 can be obtained as a delayed version of x1 .
consists of a replica of the disturbances. At convergence, the adaptive filter output will
attempt to subtract the interference signal, which is correlated to the reference signal, from
the primary signal. The output signal is a replica of the speech waveform, obtained by
removing to the best possible extent the disturbances from the input signal.
Figure 3.36. Configuration to remove the echo of signal A caused by the hybrid B.
hybrid. A similar situation takes place at the central office B, with the roles of the signals
A and B reversed. Because of impedance mismatch, the hybrids give rise to echo signals
that are added to the desired speech signals. For speech waveforms, the echo of signal A
that is generated at the hybrid A can be ignored because it is not perceived by the human
ear. The case for digital transmission is different, as will be discussed in Chapter 16. A
method to remove echo signals is illustrated in Figure 3.36, where y is a replica of the
echo. At convergence, e will consist of the speech signal B only.
Figure 3.37. Antenna array to filter and equalize wideband radio signals.
On the other hand, to cancel a wideband interferer from a periodic signal it is sufficient to
take the output of the adaptive filter (see Figure 3.39).
Figure 3.38. Scheme to remove a periodic interferer from a wideband desired signal.
Figure 3.39. Scheme to remove a wideband interferer from a periodic desired signal.
Bibliography
[1] J. R. Treichler, C. R. Johnson Jr., and M. G. Larimore, Theory and design of adaptive
filters. New York: John Wiley & Sons, 1987.
[2] J. J. Shynk, “Adaptive IIR filtering”, IEEE ASSP Magazine, vol. 6, pp. 4–21, Apr.
1989.
[3] G. Ungerboeck, “Theory on the speed of convergence in adaptive equalizers for digital
communication”, IBM Journal of Research and Development, vol. 16, pp. 546–555,
Nov. 1972.
[4] G. H. Golub and C. F. van Loan, Matrix computations. Baltimore and London: The
Johns Hopkins University Press, 2nd ed., 1989.
[5] S. H. Ardalan and S. T. Alexander, “Fixed-point round-off error analysis of the ex-
ponentially windowed RLS algorithm for time varying systems”, IEEE Trans. on
Acoustics, Speech and Signal Processing, vol. 35, pp. 770–783, June 1987.
[6] E. Eweda, “Comparison of RLS, LMS and sign algorithms for tracking randomly time
varying channels”, IEEE Trans. on Signal Processing, vol. 42, pp. 2937–2944, Nov.
1994.
[7] W. A. Gardner, “Nonstationary learning characteristics of the LMS algorithm”, IEEE
Trans. on Circuits and Systems, vol. 34, pp. 1199–1207, Oct. 1987.
[8] V. Solo, “The limiting behavior of LMS”, IEEE Trans. on Acoustics, Speech and
Signal Processing, vol. 37, pp. 1909–1922, Dec. 1989.
[9] S. Haykin, Neural networks: a comprehensive foundation. New York: Macmillan
Publishing Company, 1994.
[10] S. C. Douglas, “A family of normalized LMS algorithms”, IEEE Signal Processing
Letters, vol. 1, pp. 49–51, Mar. 1994.
[11] B. Porat and T. Kailath, “Normalized lattice algorithms for least-squares FIR sys-
tem identification”, IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 31,
pp. 122–128, Feb. 1983.
[12] S. Haykin, Adaptive filter theory. Englewood Cliffs, NJ: Prentice-Hall, 3rd ed., 1996.
[13] J. R. Zeidler, “Performance analysis of LMS adaptive prediction filters”, IEEE Pro-
ceedings, vol. 78, pp. 1781–1806, Dec. 1990.
[14] D. Falconer and L. Ljung, “Application of fast Kalman estimation to adaptive equal-
ization”, IEEE Trans. on Communications, vol. 26, pp. 1439–1446, Oct. 1978.
[15] J. M. Cioffi and T. Kailath, “Fast, recursive-least-squares transversal filters for adaptive
filtering”, IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 32, pp. 304–
337, Apr. 1984.
[16] M. L. Honig and D. G. Messerschmitt, Adaptive filters: structures, algorithms and
applications. Boston, MA: Kluwer Academic Publishers, 1984.
[17] J. M. Cioffi, “High speed systolic implementation of fast QR adaptive filters”, in Proc.
ICASSP, pp. 1584–1588, 1988.
[18] F. Ling, D. Manolakis, and J. G. Proakis, “Numerically robust least-squares lattice-
ladder algorithm with direct updating of the reflection coefficients”, IEEE Trans. on
Acoustics, Speech and Signal Processing, vol. 34, pp. 837–845, Aug. 1986.
[19] P. A. Regalia, “Numerical stability properties of a QR-based fast least squares algo-
rithm”, IEEE Trans. on Signal Processing, vol. 41, pp. 2096–2109, June 1993.
[20] S. T. Alexander and A. L. Ghirnikar, “A method for recursive least-squares filtering
based upon an inverse QR decomposition”, IEEE Trans. on Signal Processing, vol. 41,
pp. 20–30, Jan. 1993.
[21] J. M. Cioffi, “The fast adaptive rotor’s RLS algorithm”, IEEE Trans. on Acoustics,
Speech and Signal Processing, vol. 38, pp. 631–653, Apr. 1990.
[22] Z.-S. Liu, “QR methods of O.N / complexity in adaptive parameter estimation”, IEEE
Trans. on Signal Processing, vol. 43, pp. 720–729, Mar. 1995.
[23] J. A. Bucklew, T. G. Kurtz, and W. A. Sethares, “Weak convergence and local stability
properties of fixed step size recursive algorithms”, IEEE Trans. on Information Theory,
vol. 39, pp. 966–978, May 1993.
[24] B. Widrow, “Fundamental relations between the LMS algorithm and the DFT”, IEEE
Trans. on Circuits and Systems, vol. 34, pp. 814–820, July 1987.
[25] C. F. N. Cowan and P. M. Grant, Adaptive filters. Englewood Cliffs, NJ: Prentice-Hall,
1985.
[26] N. J. Bershad and P. L. Feintuch, “A normalized frequency domain LMS adaptive
algorithm”, IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 34, pp. 452–
461, June 1986.
[27] R. P. Bitmead and B. D. O. Anderson, “Adaptive frequency sampling filters”, IEEE
Trans. on Circuits and Systems, vol. 28, pp. 524–543, June 1981.
[29] B. Widrow and S. D. Stearns, Adaptive signal processing. Englewood Cliffs, NJ:
Prentice-Hall, 1985.
[30] O. Macchi, Adaptive processing: the LMS approach with applications in transmission.
New York: John Wiley & Sons, 1995.
[32] I. J. Gupta and A. A. Ksienski, “Adaptive antenna array for weak interfering signals”,
IEEE Trans. on Antennas Propag., vol. 34, pp. 420–426, Mar. 1986.
[33] L. L. Horowitz and K. D. Senne, “Performance advantage of complex LMS for con-
trolling narrow-band adaptive array”, IEEE Trans. on Circuits and Systems, vol. 28,
pp. 562–576, June 1981.
[36] P. Fan and M. Darnell, Sequence design for communications applications. Taunton:
Research Studies Press, 1996.
[37] D. C. Chu, “Polyphase codes with good periodic correlation properties”, IEEE Trans.
on Information Theory, vol. 18, pp. 531–532, July 1972.
[38] R. L. Frank and S. A. Zadoff, “Phase shift pulse codes with good periodic correlation
properties”, IRE Trans. on Information Theory, vol. 8, pp. 381–382, Oct. 1962.
[39] A. Milewski, “Periodic sequences with optimal properties for channel estimation
and fast start-up equalization”, IBM Journal of Research and Development, vol. 27,
pp. 426–431, Sept. 1983.
[41] R. Gold, “Optimal binary sequences for spread spectrum multiplexing”, IEEE Trans.
on Information Theory, vol. 13, pp. 619–621, Oct. 1967.
[42] R. Gold, “Maximal recursive sequences with 3-valued recursive cross-correlation func-
tions”, IEEE Trans. on Information Theory, vol. 14, pp. 154–155, Jan. 1968.
[43] S. L. Marple Jr., “Efficient least squares FIR system identification”, IEEE Trans. on
Acoustics, Speech and Signal Processing, vol. 29, pp. 62–73, Feb. 1981.
[44] J. I. Nagumo and A. Noda, “A learning method for system identification”, IEEE Trans.
on Automatic Control, vol. 12, pp. 282–287, June 1967.
[45] S. N. Crozier, D. D. Falconer, and S. A. Mahmoud, “Least sum of squared errors
(LSSE) channel estimation”, IEE Proceedings-F, vol. 138, pp. 371–378, Aug. 1991.
[46] N. Benvenuto, “Distortion analysis on measuring the impulse response of a system
using a cross-correlation method”, AT&T Bell Laboratories Technical Journal, vol. 63,
pp. 2171–2192, Dec. 1984.
3.A. PN sequences 233
Maximal-length sequences
Maximal-length sequences are binary PN sequences, also called r-sequences, that are
generated recursively, e.g., using a shift register (see page 877), and have period equal to
L = 2^r − 1. Let {p(ℓ)}, ℓ = 0, 1, ..., L − 1, p(ℓ) ∈ {0, 1}, be the values assumed by
the sequence in a period. It can be shown that maximal-length sequences enjoy the
following properties [34, 35].
• Every non-zero sequence of r bits appears exactly once in each period; therefore all
binary sequences of r bits are generated, except the all-zero sequence.

• The number of bits equal to “1” in a period is 2^{r−1}, and the number of bits equal to
“0” is 2^{r−1} − 1.
• A subsequence is intended here as a set of consecutive bits of the r-sequence. The
relative frequency of any non-zero subsequence of length i ≤ r is

    2^{r−i} / (2^r − 1) ≃ 2^{−i}    (3.297)

and the relative frequency of a subsequence of length i < r with all bits equal to
zero is

    (2^{r−i} − 1) / (2^r − 1) ≃ 2^{−i}    (3.298)

In both formulae the approximation is valid for sufficiently large r.
• The sum of two r-sequences, which are generated by the same shift register, but with
different initial conditions, is still an r-sequence.

• The linear span, which determines the predictability of a sequence, is equal to r [36].
In other words, the elements of a sequence can be determined by any 2r consecutive
elements of the sequence itself, while the remaining elements can be produced by a
recursive algorithm (see, e.g., the Berlekamp–Massey algorithm on page 891).
A practical example is given in Figure 3.41 for a sequence with L = 15 (r = 4), which is
generated by the recursive equation

    p(ℓ) = p(ℓ − 3) ⊕ p(ℓ − 4)    (3.299)

where ⊕ denotes modulo-2 sum. Assuming initial conditions p(−1) = p(−2) = p(−3) =
p(−4) = 1, applying (3.299) we obtain the sequence

    0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 ...    (3.300)

where the first element is p(0) and the last element shown is p(L − 1).
Obviously, the all-zero initial condition must be avoided. To generate sequences with a
larger period L we refer to Table 3.5. The above properties make an r-sequence, even if
deterministic and periodic, appear as a random i.i.d. sequence from the point of view of
the relative frequency of subsequences of bits. It turns out that an r-sequence appears as
random i.i.d. also from the point of view of the autocorrelation function. In fact, mapping
“0” to “−1” and “1” to “+1”, we get the following correlation properties.
1. Mean

    (1/L) Σ_{ℓ=0}^{L−1} p(ℓ) = 1/L    (3.301)
We note that, with the exception of the values assumed for (m) mod L = 0, the spectral
density of maximal-length sequences is constant.
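The r = 4 example and the correlation properties above can be verified directly; this sketch uses the all-ones initial state of the example:

```python
import numpy as np

r, L = 4, 15
state = [1, 1, 1, 1]                 # p(-1), p(-2), p(-3), p(-4)
seq = []
for _ in range(L):
    new = state[2] ^ state[3]        # p(l) = p(l-3) XOR p(l-4)
    seq.append(new)
    state = [new] + state[:3]        # shift the register

b = np.array([1.0 if v else -1.0 for v in seq])   # map "0" -> -1, "1" -> +1
mean = b.sum() / L
acs = np.array([(b * np.roll(b, -n)).sum() / L for n in range(L)])
```

The mean equals 1/L, the autocorrelation is 1 at zero lag and −1/L at all other lags, and every non-zero r-bit pattern occurs exactly once per period.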
    r     recursion (period L = 2^r − 1)
    1     p(ℓ) = p(ℓ−1)
    2     p(ℓ) = p(ℓ−1) ⊕ p(ℓ−2)
    3     p(ℓ) = p(ℓ−2) ⊕ p(ℓ−3)
    4     p(ℓ) = p(ℓ−3) ⊕ p(ℓ−4)
    5     p(ℓ) = p(ℓ−3) ⊕ p(ℓ−5)
    6     p(ℓ) = p(ℓ−5) ⊕ p(ℓ−6)
    7     p(ℓ) = p(ℓ−6) ⊕ p(ℓ−7)
    8     p(ℓ) = p(ℓ−2) ⊕ p(ℓ−3) ⊕ p(ℓ−4) ⊕ p(ℓ−8)
    9     p(ℓ) = p(ℓ−5) ⊕ p(ℓ−9)
    10    p(ℓ) = p(ℓ−7) ⊕ p(ℓ−10)
    11    p(ℓ) = p(ℓ−9) ⊕ p(ℓ−11)
    12    p(ℓ) = p(ℓ−2) ⊕ p(ℓ−10) ⊕ p(ℓ−11) ⊕ p(ℓ−12)
    13    p(ℓ) = p(ℓ−1) ⊕ p(ℓ−11) ⊕ p(ℓ−12) ⊕ p(ℓ−13)
    14    p(ℓ) = p(ℓ−2) ⊕ p(ℓ−12) ⊕ p(ℓ−13) ⊕ p(ℓ−14)
    15    p(ℓ) = p(ℓ−14) ⊕ p(ℓ−15)
    16    p(ℓ) = p(ℓ−11) ⊕ p(ℓ−13) ⊕ p(ℓ−14) ⊕ p(ℓ−16)
    17    p(ℓ) = p(ℓ−14) ⊕ p(ℓ−17)
    18    p(ℓ) = p(ℓ−11) ⊕ p(ℓ−18)
    19    p(ℓ) = p(ℓ−14) ⊕ p(ℓ−17) ⊕ p(ℓ−18) ⊕ p(ℓ−19)
    20    p(ℓ) = p(ℓ−17) ⊕ p(ℓ−20)
CAZAC sequences
The constant amplitude zero autocorrelation (CAZAC) sequences are complex-valued PN
sequences with constant amplitude (assuming values on the unit circle) and autocorrelation
function r_p(n) equal to zero for (n) mod L ≠ 0. Because of these characteristics they are
also called polyphase sequences [37, 38, 39]. Let L and M be two integers that
are relatively prime. The CAZAC sequences are defined as:
    for L even:    p(ℓ) = e^{j M π ℓ² / L}    ℓ = 0, 1, ..., L − 1    (3.304)

    for L odd:    p(ℓ) = e^{j M π ℓ(ℓ+1) / L}    ℓ = 0, 1, ..., L − 1    (3.305)
It can be shown that, in both cases, these sequences have the following properties.
1. Mean

    (1/L) Σ_{ℓ=0}^{L−1} p(ℓ) = 0    (3.306)

2. Correlation

    r_p(n) = 1 for (n) mod L = 0,    r_p(n) = 0 otherwise    (3.307)

3. Spectral density

    P_p( m / (L T_c) ) = T_c    (3.308)
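The constant-amplitude and zero-autocorrelation properties can be checked numerically for a short even-length sequence; L and M below are arbitrary choices satisfying the coprimality condition:

```python
import numpy as np

L, M = 16, 1                       # L even, gcd(L, M) = 1
l = np.arange(L)
p = np.exp(1j * M * np.pi * l**2 / L)          # (3.304), L even

amp = np.abs(p)                                # should be constant (= 1)
# periodic autocorrelation r_p(n)
r = np.array([(p * np.conj(np.roll(p, -n))).sum() / L for n in range(L)])
```

The autocorrelation is exactly 1 at zero lag and 0 at every other lag, so the power spectral density is flat, consistent with (3.307) and (3.308).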
Gold sequences
In a large number of applications, as for example in spread-spectrum systems with code-
division multiple access (see Chapter 10), sets of sequences having one or both of the
following properties [40] are required.
• Each sequence of the set must be easily distinguishable from its own time-shifted
versions.

• Each sequence of the set must be easily distinguishable from any other sequence of
the set and from its time-shifted versions.
An important class of periodic binary sequences that satisfy these properties, or, in other
words, that have good autocorrelation and cross-correlation characteristics, is the set of
Gold sequences [41, 42].
Then the CCS between the two r-sequences a and b assumes only three values [35, 36]:

    r_ab(n) = (1/L) Σ_{ℓ=0}^{L−1} a(ℓ) b*((ℓ − n) mod L)

            = (1/L) × { −1 + 2^{(r+e)/2}    (value assumed 2^{r−e−1} + 2^{(r−e−2)/2} times)
                      { −1                  (value assumed 2^r − 2^{r−e} − 1 times)          (3.312)
                      { −1 − 2^{(r+e)/2}    (value assumed 2^{r−e−1} − 2^{(r−e−2)/2} times)
The CCS between the two sequences, assuming “0” is mapped to “−1”, is:

    {r_ab(n)} = (1/31)(7, 7, −1, −1, −1, −9, 7, −9, 7, 7, −1, −1, 7, 7, −1, 7, −1,
                       −1, −9, −1, −1, −1, −1, −9, −1, 7, −1, −9, −9, 7, −1)    (3.315)
Construction of a set of Gold sequences. A set of Gold sequences can be constructed from
any pair {a(ℓ)} and {b(ℓ)} of preferred r-sequences of period L = 2^r − 1. We define the
set of sequences:

    G(a, b) = {a, b, a ⊕ b, a ⊕ Zb, a ⊕ Z²b, ..., a ⊕ Z^{L−1}b}    (3.316)

where Z is the shift operator that cyclically shifts a sequence to the left by one position. The
set (3.316) contains L + 2 = 2^r + 1 sequences of length L = 2^r − 1 and is called the set of
Gold sequences. It can be proved [41, 42] that, for two sequences {a′(ℓ)} and {b′(ℓ)}
belonging to the set G(a, b), the CCS as well as the ACS, with the exception of zero lag,
assume only three values:

    r_{a′b′}(n) = (1/L) × { −1, −1 − 2^{(r+1)/2}, −1 + 2^{(r+1)/2} }    for r odd
    r_{a′b′}(n) = (1/L) × { −1, −1 − 2^{(r+2)/2}, −1 + 2^{(r+2)/2} }    for r mod 4 = 2    (3.317)
Clearly, the ACS of a Gold sequence no longer has the characteristics of an r-sequence, as
is seen in the next example.
fa.`/gD.1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1/ (3.318)
fb0 .`/gD.1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1/ (3.319)
    {r_a(n)} = (1/31)(31, −1, −1, ..., −1)    (3.320)

    {r_{b′}(n)} = (1/31)(31, −1, −9, 7, 7, −9, −1, 7, −1, −9, 7, 7, −1, −1, 7, −1, −1, 7,
                         −1, −1, 7, 7, −9, −1, 7, −1, −9, 7, 7, −9, −1)    (3.321)

    {r_{ab′}(n)} = (1/31)(−1, 7, 7, 7, −1, −1, −1, −1, −1, 7, −9, −1, −1, 7, −1, −9, 7,
                          −1, −9, 7, 7, −9, −1, 7, −1, −9, −1, −1, −1, −9, −1)    (3.322)
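The three-valued cross-correlation spectrum of a preferred pair can be reproduced for r = 5. Here the second sequence is obtained by decimating the first by 3; that this yields a preferred pair for odd r is a standard construction assumed here, not stated in the text, and the all-ones initial state is an arbitrary choice:

```python
import numpy as np

r = 5
L = (1 << r) - 1                      # 31
state = [1] * r                       # p(-1), ..., p(-5), all ones
a = []
for _ in range(L):
    new = state[2] ^ state[4]         # p(l) = p(l-3) XOR p(l-5) (Table 3.5, r = 5)
    a.append(new)
    state = [new] + state[:-1]

b = [a[(3 * n) % L] for n in range(L)]   # decimation by 3: preferred pair (a, b)

A = np.array([1.0 if v else -1.0 for v in a])
B = np.array([1.0 if v else -1.0 for v in b])
ccs = np.array([(A * np.roll(B, -n)).sum() for n in range(L)])
values = sorted(set(int(round(v)) for v in ccs))
```

For r = 5 the three values (before the 1/L scaling) are −9, −1, and 7, matching the example (3.315).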
3.B. Identification of a FIR system by PN sequences 239
    r_p(n) = 1    for n = 0, ±L, ±2L, ...
    r_p(n) ≃ 0    for n = ±1, ..., ±(L − 1), ±(L + 1), ...    (3.324)
Moreover, we recall that if the input to a time-invariant filter is periodic with period L,
the output will also be periodic with period L. To estimate the impulse response {h_i}, i =
0, 1, ..., N − 1, we consider the scheme illustrated in Figure 3.42, where we choose L ≥ N,
and an input sequence x with length of at least (L + 1)N samples, obtained by repeating
{p(i)}; in other words, x(k) = p((k) mod L). We assume a delay m ∈ {0, 1, ..., N − 1}, a
rectangular window g_Rc(k) = (1/L) w_L(k), and that the system is started at instant k = 0.
For k ≥ (N − 1) + (L − 1), the output v(k) is given by
Figure 3.42. Correlation method to estimate the impulse response of an unknown system.
    v(k) = (1/L) Σ_{ℓ=0}^{L−1} u(k − ℓ) = (1/L) Σ_{ℓ=0}^{L−1} d(k − ℓ) p*((k − ℓ − m) mod L)

         = (1/L) Σ_{ℓ=0}^{L−1} [ Σ_{i=0}^{N−1} h_i p((k − ℓ − i) mod L) p*((k − ℓ − m) mod L)
                                 + w(k − ℓ) p*((k − ℓ − m) mod L) ]    (3.325)

         = Σ_{i=0}^{N−1} h_i (1/L) Σ_{ℓ=0}^{L−1} p((k − ℓ − i) mod L) p*((k − ℓ − m) mod L)
           + (1/L) Σ_{ℓ=0}^{L−1} w(k − ℓ) p*((k − ℓ − m) mod L)
As

    (1/L) Σ_{ℓ=0}^{L−1} p((k − ℓ − i) mod L) p*((k − ℓ − m) mod L) = r_p((m − i) mod L)    (3.326)
(3.325) becomes

    v(k) = Σ_{i=0}^{N−1} h_i r_p((m − i) mod L) + (1/L) Σ_{ℓ=0}^{L−1} w(k − ℓ) p*((k − ℓ − m) mod L)    (3.327)
If L ≫ 1, the second term on the right-hand side of (3.327) can be ignored; hence, observing
(3.324), we get

    v(k) ≃ h_m    (3.328)
Mean and variance of the estimate of h_m given by (3.327) are obtained as follows.

1. Mean

    E[v(k)] = Σ_{i=0}^{N−1} h_i r_p((m − i) mod L)    (3.329)

2. Variance

    var[v(k)] = var[ (1/L) Σ_{ℓ=0}^{L−1} w(k − ℓ) p*((k − ℓ − m) mod L) ] ≃ σ_w² / L    (3.330)
Figure 3.43. Correlation method via correlator to estimate the impulse response of an
unknown system.
Using the scheme of Figure 3.42, varying m from 0 to N − 1, it is possible to get an estimate
of the samples of the impulse response of the unknown system {h_i} at the output of the
filter g_Rc. However, this scheme has two disadvantages:

1. it requires a very long computation time (of the order of N L samples);

2. it requires synchronization between the two PN sequences, at the transmitter and at the receiver.

Both problems can be resolved by memorizing, after a transient of N − 1 instants, L
consecutive output samples {d(k)} in a buffer, and computing the correlation off-line:
    r̂_dx(m) = (1/L) Σ_{k=N−1}^{(N−1)+(L−1)} d(k) p*((k − m) mod L) ≃ h_m    m = 0, 1, ..., N − 1    (3.331)
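The off-line correlation (3.331) can be sketched as follows, with an r = 5 maximal-length sequence as input; the 5-tap unknown system and the noise level are hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(5)
r, L, N = 5, 31, 5
state = [1] * r
p = []
for _ in range(L):
    new = state[2] ^ state[4]            # p(l) = p(l-3) XOR p(l-5)
    p.append(1.0 if new else -1.0)       # map to +/-1, unit power
    state = [new] + state[:-1]
p = np.array(p)

h = np.array([0.9, 0.4, -0.3, 0.15, -0.05])   # hypothetical unknown system
sigma_w = 0.01
x = np.array([p[k % L] for k in range(N + 2 * L)])   # periodic input
d = {k: h @ x[k - N + 1:k + 1][::-1] + sigma_w * rng.standard_normal()
     for k in range(N - 1, (N - 1) + L)}             # L buffered output samples

# (3.331): correlate the stored outputs with the PN sequence
h_hat = np.array([(1 / L) * sum(d[k] * p[(k - m) % L]
                                for k in range(N - 1, (N - 1) + L))
                  for m in range(N)])
err = np.max(np.abs(h_hat - h))
```

The residual error contains a small bias of order 1/L, because the off-peak autocorrelation of the maximal-length sequence is −1/L rather than exactly zero.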
An alternative scheme is represented in Figure 3.43: with steps analogous to those of the
preceding scheme, we get
    v(k) = (1/L) Σ_{ℓ=0}^{L−1} d(k − (L − 1) + ℓ) p*((ℓ + (N − 1)) mod L) ≃ h_{(k−(N−1)−(L−1)) mod L}    (3.332)
    Λ_e = ||h||² / E[||Δh||²]    (3.334)
On one hand, we have to take into consideration the noise present in the observed system,
measured by (see Figure 3.42):

    Λ = M_x ||h||² / σ_w²    (3.335)
where M_x is the statistical power of the input signal. In our case M_x = 1. Finally, we refer
to the normalized ratio

    Λ_n = Λ_e / Λ = σ_w² / (M_x E[||Δh||²])    (3.336)
We note that if we indicate with d̂(k) the output of the identified system,

    d̂(k) = Σ_{i=0}^{N−1} ĥ_i x(k − i)    (3.337)

then

    z(k) − d̂(k) = Σ_{i=0}^{N−1} (h_i − ĥ_i) x(k − i)    (3.338)
having variance M_x E[||Δh||²] for a white noise input. As a consequence, (3.336) measures
the ratio between the variance of the additive noise of the observed system and the variance
of the error at the output of the identified system. From (3.338) we note that the difference

    d(k) − d̂(k) = (z(k) − d̂(k)) + w(k)    (3.339)

consists of two terms, one due to the estimation error and the other due to the noise of the
system.
    z(k) = Σ_{n=0}^{L−1} h_n x((k − n) mod L)    (3.340)

Define the vector z^T = [z(k), z(k + 1), ..., z(k + (L − 1))], and a circulant matrix M whose
first row is [x((k) mod L), x((k − 1) mod L), ..., x((k − (L − 1)) mod L)]. After an initial
transient of L − 1 samples, using the output samples {z(L − 1), ..., z(2(L − 1))} we obtain
a system of L linear equations in L unknowns, which in matrix notation can be written as

    z = M h    (3.341)
    Z_m = X_m H_m    m = 0, ..., L − 1    (3.343)
Assuming that w is zero-mean white noise with power σ_w², the mean and variance of the
estimate (3.346) are obtained as follows.
1. Mean

    E[ĥ] = h    (3.347)

2. Variance

    E[||Δh||²] = E[ Σ_{k=0}^{L−1} |ĥ_k − h_k|² ] = L E[|ĥ_k − h_k|²] = L σ_w² Σ_{i=0}^{L−1} |s(i)|²    (3.348)
    Σ_{i=0}^{L−1} |s(i)|² = (1/L) Σ_{j=0}^{L−1} |S_j|² = (1/L) Σ_{j=0}^{L−1} 1/|X_j|²    (3.349)

    |X_0|² = 1
    |X_1|² = |X_2|² = ··· = |X_{L−1}|² = L + 1    (3.350)

    Λ_n = (L + 1) / (2L)    (3.351)
For CAZAC sequences, from (3.308), we have that all terms |X_j|² are equal:

    |X_j|² = L    j = 0, 1, ..., L − 1    (3.352)
and the minimum of (3.348) is equal to σ_w², therefore Λ_n = 1. In other words, if L is large,
CAZAC sequences yield a 3 dB improvement with respect to maximal-length sequences.
Although this method is very simple, it has the disadvantage that, in the best case, it gives
an estimate with variance equal to the noise variance of the original system.
From (3.353) we see that the observation of L samples of the received signal requires
the transmission of L_TS = L + N − 1 symbols of the training sequence {x(0), x(1), ...,
x((N − 1) + (L − 1))}. The unknown system can be identified using the LS criterion
[43, 44, 45]. For a given estimate ĥ of the unknown system, the sum of squared errors at
the output is given by
    E = Σ_{k=N−1}^{N−1+L−1} |d(k) − d̂(k)|²    (3.354)

    d̂(k) = ĥ^T x(k)    (3.355)

    E_d = Σ_{k=N−1}^{N−1+L−1} |d(k)|²    (3.356)
where
    Φ(i, n) = Σ_{k=N−1}^{N−1+L−1} x*(k − i) x(k − n)    (3.358)
3. Cross-correlation vector

where

    ϑ(n) = Σ_{k=N−1}^{N−1+L−1} d(k) x*(k − n)    (3.360)
    E = E_d − ĥ^H ϑ − ϑ^H ĥ + ĥ^H Φ ĥ    (3.361)
As the matrix Φ is determined by a suitably chosen training sequence, we can assume that
Φ is positive definite and therefore its inverse exists. The solution to the LS problem yields

    ĥ_ls = Φ^{−1} ϑ    (3.362)

    E_min = E_d − ϑ^H ĥ_ls    (3.363)

We observe that the matrix Φ^{−1} in (3.362) can be pre-computed and stored, because
it depends only on the training sequence. In some applications it is useful to estimate the
variance of the noise signal w which, observing (3.339), for ĥ ≃ h can be assumed equal to

    σ̂_w² = (1/L) E_min    (3.364)
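Equations (3.358)–(3.364) can be sketched in a few lines; the training length, noise level, and unknown system below are assumptions, and real signals are used so conjugates are omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
N, L = 4, 64
sigma_w = 0.05
h = np.array([1.0, -0.6, 0.3, -0.1])           # hypothetical unknown system

x = rng.choice([-1.0, 1.0], size=(N - 1) + L)  # training sequence, L_TS symbols
X = np.array([x[k - N + 1:k + 1][::-1]         # row k: [x(k), ..., x(k-N+1)]
              for k in range(N - 1, (N - 1) + L)])
d = X @ h + sigma_w * rng.standard_normal(L)   # noisy system output

Phi = X.T @ X                                  # correlation matrix, (3.358)
theta = X.T @ d                                # cross-correlation vector, (3.360)
h_ls = np.linalg.solve(Phi, theta)             # LS solution, (3.362)
E_min = d @ d - theta @ h_ls                   # minimum SSE, (3.363)
sigma_hat2 = E_min / L                         # noise-variance estimate, (3.364)
err = np.linalg.norm(h_ls - h)
```

Note that solving the normal equations directly, as here, replaces the pre-computed Φ^{−1} mentioned in the text; the result is the same.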
L
and
    ĥ_ls = (I^H I)^{−1} I^H o    (3.368)
    ϑ = Σ_{k=N−1}^{N−1+L−1} d(k) x*(k)    (3.369)

    ξ = Σ_{k=N−1}^{N−1+L−1} w(k) x*(k)    (3.370)

    ϑ = Φ h + ξ    (3.371)
Consequently, substituting (3.371) in (3.362), the estimation error vector can be expressed as

    Δh = Φ^{−1} ξ    (3.372)
If w is zero-mean white noise with variance σ_w², ξ* is a zero-mean random vector with
correlation matrix
In particular,

    Φ = L I    (3.377)
where I is the N × N identity matrix. The elements on the diagonal of Φ^{−1} are equal to
1/L, and (3.376) yields

    Λ_n = L / N    (3.378)
Equation (3.378) gives a good indication of the relation between the number of observations
L, the number of system coefficients N, and Λ_n. For example, doubling the length of the
training sequence, Λ_n also doubles. Now, using as training sequence a maximal-length
sequence of period L, and indicating with 1_{N×N} the matrix with all elements equal to
1, the correlation matrix can be written as

    Φ = (L + 1) I − 1_{N×N}    (3.379)

    Λ_n = (L + 1)(L + 1 − N) / ( N (L + 2 − N) )    (3.381)
Figure 3.44. Λ_n vs. N for CAZAC sequences (solid line) and maximal-length sequences
(dash-dotted line), for various values of L.
• For a given N, choosing L ≫ N, the two sequences yield approximately the same Λ_n. The worst case is obtained for L = N; for example, for L = 15 the maximal-length sequence yields a value of Λ_n that is about 3 dB lower than the upper bound (3.378). We note that the frequency method operates for L = N.
• For a given value of L, because of the presence of the noise w, the estimate of the coefficients becomes worse if the number of coefficients N is larger than the number of coefficients of the system, N_h. On the other hand, if N is smaller than N_h, the estimation error may assume large values (see (3.270)).
• For sparse systems, where the number of coefficients may be large but only a few of them are non-zero, the estimate is usually very noisy. Therefore, after obtaining the estimate, it is necessary to set to zero all coefficients whose amplitude is below a certain threshold.
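The thresholding step just described can be sketched as follows (a minimal Python illustration; the coefficient values and the threshold are arbitrary, and in practice the threshold would be tied to an estimate of the standard deviation of the estimation noise):

```python
import numpy as np

def threshold_estimate(h_hat, thr):
    """Set to zero all estimated coefficients whose magnitude is below thr."""
    h_sparse = h_hat.copy()
    h_sparse[np.abs(h_sparse) < thr] = 0.0
    return h_sparse

# Noisy estimate of a sparse system: only taps 1 and 3 are truly non-zero.
h_hat = np.array([0.02, 0.95, -0.01, 0.48, 0.03])
print(threshold_estimate(h_hat, thr=0.1))
```

Choosing thr of the order of a few standard deviations of the estimation noise keeps the true taps while discarding the spurious ones.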
ĥ = (1/L) ϑ

where ϑ is given by (3.359). Observing (3.371), we get

Δh = ((1/L) Φ − I) h + (1/L) ξ    (3.382)
Consequently the estimate is affected by a bias term equal to ((1/L) Φ − I) h, and has a covariance matrix equal to (1/L²) R_ξ. In particular, using (3.373), it turns out that

E[‖Δh‖²] = ‖((1/L) Φ − I) h‖² + (σ_w²/L²) tr[Φ]    (3.383)

and

Λ_n = 1 / ( (1/L²) tr[Φ] + (1/σ_w²) ‖((1/L) Φ − I) h‖² )    (3.384)
Using a CAZAC sequence, from (3.377) the second term of the denominator in (3.384) vanishes, and Λ_n is given by (3.378). In fact, for a CAZAC sequence, as (3.324) is strictly true and Φ^(−1) is diagonal, the LS method (3.362) coincides with the correlation method (3.331).
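The LS identification (3.368) can be verified numerically. The sketch below (Python, with arbitrary example values for h and a binary training sequence standing in for the PN sequence) forms the L × N data matrix and solves the LS problem; in the noiseless case the system coefficients are recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4, 64                                  # coefficients and observations
h = np.array([1.0, -0.5, 0.25, 0.1])          # unknown FIR system (example)
x = rng.choice([-1.0, 1.0], size=N - 1 + L)   # training sequence

# Row k of the data matrix holds [x(k), x(k-1), ..., x(k-N+1)].
I_mat = np.array([x[k - N + 1:k + 1][::-1] for k in range(N - 1, N - 1 + L)])
d = I_mat @ h                                 # noiseless desired signal (w = 0)

h_ls, *_ = np.linalg.lstsq(I_mat, d, rcond=None)
print(np.round(h_ls, 6))
```

With additive noise w, ĥ_LS deviates from h by the estimation error analyzed above.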
‖((1/L) Φ − I) h‖² = (1/L²) ( ‖h‖² + (N − 2) |H(0)|² )

where H(0) = Σ_{i=0}^{N−1} h_i. Moreover, we have

tr[Φ] = N L    (3.386)

hence

Λ_n = L / ( N + (1/L) ( ‖h‖² + (N − 2) |H(0)|² ) / σ_w² ) ≥ L / (N + 1)    (3.387)

provided that

( ‖h‖² + (N − 2) |H(0)|² ) / σ_w² < L
For a known input sequence x, we desire to estimate h using the LMMSE method given in Appendix 2.A, from the observation of the noisy output sequence d, which will now be denoted as o.
We note that in the LS method the observation was the transmitted signal x, while the desired signal was the noisy system output d. The observation is now given by d and the desired signal is the system impulse response h. Consequently, some caution is needed in applying (2.229) to the problem under investigation. Recalling the definition (3.366) of the observation vector o, and (2.229), the LMMSE estimator is given by

ĥ_LMMSE = (R_o^(−1) R_oh)^T o    (3.388)
o = ℐ h + w    (3.390)

R_o = E[o* o^T] = ℐ* R_h ℐ^T + R_w    (3.391)

and

If R_w = σ_w² I, we have
We note that, with respect to the LS method (3.368), the LMMSE method (3.395) introduces a weighting of the components of ϑ = ℐ^H o, which depends on the ratio between the noise variance and the variance of h. If the variance of the components of h is large, then R_h is also large and R_h^(−1) can likely be neglected in (3.395).
We conclude by recalling that R_h is diagonal for a WSSUS radio channel model (see (4.221)), and that the components of h are derived from the power delay profile.
For an analysis of the estimation error we can refer to (2.233), which uses the error vector Δh = ĥ_LMMSE − h, having a correlation matrix

If R_w = σ_w² I, we get

Moreover, in general

This result can be compared with that of the LS method given by (3.375).
x(t) = Σ_{i=0}^{+∞} p(i mod L) g(t − i T_c)    (3.400)

r_x(t) = (1/(L T_c)) ∫_{−L T_c/2}^{+L T_c/2} x(ζ) x*(ζ − t) dζ    (3.401)
Figure 3.45. Basic scheme to measure the impulse response of an unknown system.
r_x(t) = (1/T_c) Σ_{ℓ=0}^{L−1} r_p(ℓ) r_g(t − ℓ T_c)    0 ≤ t ≤ L T_c    (3.402)

where, in the case of g(t) given by (3.399), we have

r_g(t) = ∫_0^{T_c} g(ζ) g(ζ − t) dζ = T_c (1 − |t|/T_c) rect(t/(2T_c))    (3.403)

Substituting (3.403) in (3.402) and assuming a maximal-length PN sequence, with r_p(0) = 1 and r_p(ℓ) = −1/L for ℓ = 1, …, L − 1, we obtain

r_x(t) = −(1/L) + (1 + 1/L)(1 − |t|/T_c) rect(t/(2T_c))    |t| ≤ L T_c / 2    (3.404)
as shown in Figure 3.46 for L = 8. If the output z of the unknown system to be identified is multiplied by a delayed version of the input, x*(t − τ), and the result is filtered by an ideal integrator between 0 and L T_c with impulse response

g_Rc(t) = (1/(L T_c)) w_{L T_c}(t) = (1/(L T_c)) rect((t − L T_c/2)/(L T_c))    (3.405)
we obtain

v(t) = (1/(L T_c)) ∫_{t−L T_c}^{t} u(ζ) dζ
     = (1/(L T_c)) ∫_{t−L T_c}^{t} [ ∫_0^{+∞} h(ξ) x(ζ − ξ) dξ ] x*(ζ − τ) dζ    (3.406)
     = ∫_0^{+∞} h(ξ) r_x(τ − ξ) dξ = (h ∗ r_x)(τ) = v_τ
Figure 3.47. Sliding window method to measure the impulse response of an unknown
system.
Therefore the output assumes a constant value v_τ, equal to the convolution between the unknown system h and the autocorrelation of x evaluated at τ. Assuming 1/T_c is larger than the maximum frequency of the spectral components of h, and L is sufficiently large, the output v_τ is approximately proportional to h(τ). The scheme represented in Figure 3.47 is an alternative to that of Figure 3.45, and is simpler to implement because it does not require synchronization of the two PN sequences at transmitter and receiver. In this latter scheme, the output z of the unknown system is multiplied by a PN sequence having the same characteristics as the transmitted sequence, but a different clock frequency f″ = 1/T′_c, related to the clock frequency f′ = 1/T_c of the transmitter by the relation

f″ = f′ (1 − 1/K)    (3.407)
where K is a parameter of the system. We consider the function

r_{x′x}(τ) = (1/(L T_c)) ∫_0^{L T_c} [x′(ζ)]* x(ζ − τ) dζ    (3.408)

where τ is the delay at time t = 0 between the two sequences. As time elapses, the delay between the two sequences decreases by the quantity (t/T′_c)(T′_c − T_c) = t/K, so that

(1/(L T_c)) ∫_{t−L T_c}^{t} [x′(ζ)]* x(ζ − τ) dζ ≃ r_{x′x}(τ − (t − L T_c)/K)    for t ≥ L T_c    (3.409)
If K is sufficiently large, we can assume that

r_{x′x}(τ) ≃ r_x(τ)    (3.410)

At the output of the filter g_Rc given by (3.405), we therefore have

v(t) = (1/(L T_c)) ∫_{t−L T_c}^{t} [ ∫_0^{+∞} h(ξ) x(ζ − ξ) dξ ] [x′(ζ)]* dζ
     = ∫_0^{+∞} h(ξ) r_{x′x}((t − L T_c)/K − ξ) dξ    (3.411)
     ≃ ∫_0^{+∞} h(ξ) r_x((t − L T_c)/K − ξ) dξ
where the integral in (3.412) coincides with the integral in (3.406). If K is sufficiently large (an increase of K clearly requires a greater precision, and hence a greater cost, of the frequency synthesizer used to generate f″), it can be shown that the approximations in (3.409) and (3.410) are valid. Therefore the systems in Figure 3.47 and Figure 3.45 are equivalent.
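The key property used throughout this appendix, r_p(0) = 1 and r_p(ℓ) = −1/L for ℓ ≠ 0, can be checked with a short sketch (Python; the LFSR below generates a maximal-length sequence of period L = 7 from the polynomial x³ + x + 1, an illustrative choice):

```python
import numpy as np

def mls(degree=3, taps=(3, 1)):
    """Maximal-length (PN) sequence of period L = 2**degree - 1 via an LFSR.
    taps gives the exponents of the feedback polynomial (x^3 + x + 1 here)."""
    state = [1] * degree
    out = []
    for _ in range(2 ** degree - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array([1.0 if b else -1.0 for b in out])

p = mls()
L = len(p)  # 7
# Periodic autocorrelation: 1 at lag 0, -1/L at every other lag.
r = np.array([np.mean(p * np.roll(p, l)) for l in range(L)])
print(np.round(r, 4))
```

Any primitive polynomial of degree m yields a sequence of period L = 2^m − 1 with the same two-level autocorrelation.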
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 4
Transmission media
The first two sections of this chapter introduce several parameters that are associated with
the electrical characteristics of electronic devices. The fundamental properties of various
transmission media will be discussed in the remaining sections.
In the cascade of several 2-port networks, the frequency response G_Ch(f) of each network is given by the ratio V_L/V_1. Therefore, with reference to Figure 4.2c, we have

G_Ch(f) = G_1o(f) G_L(f)    (4.1)

where G_1o(f) and G_L(f) denote the frequency responses of the 2-port network and of the load, respectively. We note that in some cases the frequency response could be defined as V_L/I_1, I_L/V_1, or I_L/I_1. For these cases, the expression of G_Ch(f) will be different from (4.1).
To analyze the characteristics of the network of Figure 4.2a, however, we will refer to
the study of two-terminal devices.
4.1. Electrical characterization of a transmission system 257
assuming that v(t)i(t) is a process that is ergodic in the mean (see (1.442)). If P_vi is the cross-spectral density between v and i, we have:

P = ∫_{−∞}^{+∞} P_vi(f) df = ∫_{−∞}^{+∞} Re[P_vi(f)] df    (4.4)

In fact, from (1.230), the cross-correlation r_vi is a real function; hence P_vi is Hermitian, with even real part and odd imaginary part.

Definition 4.1
The function

p(f) = Re[P_vi(f)]    (W/Hz)    (4.5)

is called the average power density transferred to the load and expresses the average power per unit of frequency.
We now obtain P_vi in terms of P_vb using the method shown on page 49. Being

I = V_b / (Z_b + Z_c)        V = Z_c V_b / (Z_b + Z_c)    (4.6)

then

V I* = (Z_c / |Z_b + Z_c|²) V_b V_b*    (4.7)

hence

P_vi(f) = (Z_c / |Z_b + Z_c|²) P_vb(f)    (4.8)

and

p(f) = P_vb(f) R_c / |Z_b + Z_c|²        R_c = Re[Z_c]    (4.9)

In general, if v is the voltage at the load impedance Z_c, the following relation holds:

p(f) = P_v(f) R_c / |Z_c|²    (4.10)
Definition 4.2
The available power per unit of frequency of an active two-terminal device is defined as the maximum of (4.9) with respect to Z_c and is obtained for Z_c = Z_b*:

p_d(f) = P_vb(f) / (4 R_b)        R_b = Re[Z_b]    (4.11)
We note that (4.11) is a parameter of the active device that expresses the maximum power
per unit of frequency that can be delivered to the load.
Active two-terminal device as a current generator. For the circuit of Figure 4.4, which consists of a current source with admittance Y_b = G_b + j B_b and a load with Y_c = Y_b*, the available power per unit of frequency is given by

p_d(f) = P_ib(f) / (4 G_b)    (4.12)

where G_b = Re[Y_b], and P_ib (A²/Hz) is the PSD of the signal i_b.
There is a simple relation between the circuit of Figure 4.4 and that of Figure 4.3 for Z_b = R_b + j X_b; from Norton's theorem we get

Y_b = 1/Z_b = R_b/|Z_b|² − j X_b/|Z_b|²    (4.13)

and

I_b = V_b / Z_b    (4.14)
and

G_1o(f) = V_o(f) / V_1(f)    (4.19)

The conditions for the absence of distortion between v_i and v_L are verified if

G(f) = V_L / V_i = G_i(f) G_1o(f) G_L(f)    (4.20)

is a constant in the passband of v_i. Note that also in this case a constant delay factor may be present.
Let p_i(f) and p_o(f) be the average power densities of source and load, respectively:

p_i(f) = P_v1(f) R_1/|Z_1|² = P_vi(f) R_1/|Z_1 + Z_i|²    (4.21)

and

p_o(f) = P_vL(f) R_L/|Z_L|² = P_vo(f) R_L/|Z_L + Z_2|²    (4.22)
Definition 4.3
The network power gain is defined as the ratio

g(f) = p_o(f) / p_i(f)    (4.23)

Using the expressions of the various frequency responses, and observing (see Figure 4.2c)

P_vL(f) = |G_Ch(f)|² P_v1(f)    (4.24)

we get

g(f) = (R_L/|Z_L|²) P_vL(f) (1/P_v1(f)) (|Z_1|²/R_1) = |G_Ch(f)|² (R_L |Z_1|²)/(R_1 |Z_L|²)    (4.25)
When the condition for maximum transfer of power holds only at the source, we introduce the notion of transducer gain, defined as

g_t(f) = p_o(f) / p_{i,d}(f)    (4.26)

where p_{i,d}(f) is the available power per unit of frequency at the source.

Definition 4.4
A 2-port network is said to be perfectly matched if the conditions for maximum transfer of power are established at the source as well as at the load:

Z_1 = Z_i*    and    Z_L = Z_2*    (4.27)
Definition 4.5
The available power gain is defined as the ratio between p_{o,d}(f) and p_{i,d}(f):

g_d(f) = p_{o,d}(f) / p_{i,d}(f)    (4.30)

In particular, for Z_1 = Z_2, that is, when input and output network impedances coincide, from (4.25) we get

g_d(f) = |G_Ch(f)|²    (4.31)

If in the passband B of v_i we have g_d > 1, the network is said to be active. If instead g_d < 1, then the network is passive; in this case, we speak of the available attenuation of the network:

a_d = 1 / g_d    (4.32)

In dB,

(g_d)_dB = 10 log₁₀ g_d    (4.33)
and

(a_d)_dB = 10 log₁₀ a_d = −(g_d)_dB    (4.34)
Definition 4.6
Apart from a possible delay, for an ideal distortionless network with power gain (attenuation) g_d (a_d), we will assume that the frequency response of the network is

G_Ch(f) = G_o constant    f ∈ B    (4.35)

Consequently, the impulse response is given by

g_Ch(t) = G_o δ(t)    (4.36)

where, observing (4.31),

G_o = √g_d = 1/√a_d    (4.37)

We note that, in case the conditions leading to (4.31) are not verified, the relation between G_o and g_d is more complicated (see (4.25)).
Example 4.1.1
For P = 0.5 W we have (P)_dBW = −3 dBW and (P)_dBm = 27 dBm.
With reference to (4.37), we note that (G_o)_dB = (g_d)_dB. In fact, as G_o denotes a ratio of voltages, it follows that

(G_o)_dB = 20 log₁₀ G_o = 10 log₁₀ g_d = (g_d)_dB    (4.42)
For telephone signals, a further power unit is given by dBrnc, which expresses the power
in dBrn of a signal filtered according to the mask given in Figure 4.5 [2]. The filter reflects
the perception of the human ear and is known as C-message weighting.
Figure 4.5. Frequency weighting known as C-message weighting. [© 1982 Bell Telephone Laboratories. Reproduced with permission of Lucent Technologies, Inc./Bell Labs.]
4.2. Noise generated by electrical devices and networks 263
Thermal noise
Thermal noise is a phenomenon associated with Brownian or random motion of electrons
in a conductor. As each electron carries a unit charge, its motion between collisions with
atoms produces a short impulse of current. Actually, if we represent the motion of an
electron within a conductor in a two-dimensional plane, the typical behavior is represented
in Figure 4.6a where the changes in the direction of the electron motion are determined by
random collisions with atoms at the set of instants ftk g. Between two consecutive collisions
the electron produces a current that is proportional to the projection of the velocity onto
the axis of the conductor. For example, the behavior of instantaneous current for the path
of Figure 4.6a is illustrated in Figure 4.6b. Although the average value (DC component) is zero, the large number of electrons and collisions gives rise to a measurable alternating component. If a current flows through the conductor, an orderly motion is superimposed on the disorderly motion of electrons; the sources of the two motions do not interact with each other. For a conductor of resistance R, at an absolute temperature of T kelvin, the power spectral density of the open-circuit voltage w at the conductor terminals is given by

P_w(f) = 2kTR η(f)    (4.43)

where k = 1.3805 · 10⁻²³ J/K is the Boltzmann constant and

η(f) = (hf/kT) (e^(hf/kT) − 1)^(−1)    (4.44)
Figure 4.6. Representation of electron motion and current produced by the motion.
where h = 6.6262 · 10⁻³⁴ J·s is the Planck constant. We note that, for f ≪ kT/h ≈ 6 · 10¹² Hz (at room temperature T = 290 K), we get η(f) ≃ 1. Therefore the PSD of w is approximately white, i.e.

P_w(f) = 2kTR    (4.45)
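Numerically, the correction factor of (4.44) stays close to 1 over the whole radio spectrum at room temperature, which justifies the white approximation (4.45). A quick check (Python sketch; R = 50 Ω is an arbitrary example value):

```python
import math

k = 1.3805e-23   # Boltzmann constant, J/K
h = 6.6262e-34   # Planck constant, J*s
T = 290.0        # room temperature, K
R = 50.0         # resistance, ohm (example value)

def eta(f):
    """Frequency-dependent factor of (4.44)."""
    a = h * f / (k * T)
    return a / math.expm1(a)

print(eta(1e9))        # essentially 1 at 1 GHz
print(eta(6e12))       # noticeably below 1 near kT/h
print(2 * k * T * R)   # flat PSD 2kTR of (4.45), in V^2/Hz
```

The PSD only starts to roll off near kT/h, far above the frequencies of interest for conventional transmission media.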
We adopt the electrical model of Figure 4.7, where a conductor is modelled as a noiseless device having in series a generator of noise voltage w.² Because at each instant the noise voltage w(t) is due to the superposition of several current pulses, a suitable model for the amplitude distribution of w(t) is the Gaussian distribution with zero mean. Note that the variance is very large, because of the wide support of P_w. In the case of a linear two-terminal device with impedance Z = R + jX at absolute temperature T, the spectral density of the open-circuit voltage w is still given by (4.43), with R = Re[Z]. In other words, only the resistive component of the impedance gives rise to thermal noise. Let us consider the scheme of Figure 4.8, where a noisy impedance Z = R + jX is matched to the load for maximum transfer of power. Observing (4.11), the available noise power per
Figure 4.8. Electrical circuit to measure the available source noise power to the load.
² An equivalent model assumes a noiseless conductor in parallel with a generator of noise current j(t) with PSD P_j(f) = 2kT (1/R) η(f).
Shot noise
Most devices are affected by shot noise, which is due to the discrete nature of electron
flow: also in this case the noise represents the instantaneous random deviation of current
or voltage from the average value. Shot noise, expressed as a current signal, can also be
modelled as Gaussian noise with a constant PSD given by

P_{i,shot}(f) = e I    (A²/Hz)    (4.51)

where e = 1.6 · 10⁻¹⁹ C is the electron charge and I is the average current that flows through the device; in this case it is convenient to use the electrical model of Figure 4.4.
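As a numerical illustration (the current and bandwidth values are arbitrary), the shot-noise PSD of (4.51) and the resulting rms noise current in a given band can be evaluated directly:

```python
e = 1.6e-19          # electron charge, C
I = 1e-3             # average device current, A (example value)
B = 1e6              # bandwidth of interest, Hz (example value)

P_shot = e * I                      # two-sided PSD of (4.51), A^2/Hz
i_rms = (P_shot * 2 * B) ** 0.5     # rms current over the two-sided band 2B
print(P_shot, i_rms)
```

For I = 1 mA and B = 1 MHz this gives an rms shot-noise current of a few tens of nanoamperes.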
In other words, Tw represents the absolute temperature that a thermal noise source should
have in order to produce the same available noise power as the device. This concept can be
extended and applied to the output of an amplifier or an antenna, expressing the noise power
in terms of effective noise temperature. We note that if a device at absolute temperature T
contains more than one noise source, then Tw > T.
Figure 4.9. Noise source connected to a noisy 2-port network: three equivalent models.
signal given by w_o = w_o^(S) + w_o^(A). Assuming the two noise signals w_o^(S) and w_o^(A) are uncorrelated, the available power at the load will be equal to the sum of the two powers, i.e.

p_wo(f) = g_d(f) (k T_S / 2) + p_{wo^(A)}(f)    (4.54)

Definition 4.7
The effective noise temperature T_A of the 2-port network is defined as

T_A(f) = p_{wo^(A)}(f) / (g_d(f) (k/2))    (4.55)

and denotes the temperature of a thermal noise source, connected to a noiseless 2-port network, that produces the same output noise power. Then (4.54) becomes:³

p_wo(f) = g_d(f) (k/2) [T_S + T_A]    (4.56)
Definition 4.8
The effective input temperature of a system consisting of a source connected to a 2-port network is

T_wi = T_S + T_A    (4.57)
Definition 4.9
The effective output temperature of a system consisting of a source connected to a 2-port network is

T_wo = g_d(f) T_wi    (4.58)

Then

p_wo(f) = (k/2) T_wo    (4.59)
Equivalent-noise models
Based on the previous considerations, we introduce the equivalent circuits illustrated in Figures 4.9b and 4.9c. In particular, the scheme of Figure 4.9b assumes the network to be noiseless and considers an equivalent noise source at the input. The scheme of Figure 4.9c, on the other hand, places all noise sources at the output. The effects on the load are the same for the three schemes of Figure 4.9.
³ To simplify the notation, we have omitted the dependency on frequency of all noise temperatures. Note that the dependency on frequency of T_A and T_S is determined by g_d(f), p_{wo^(A)}(f), and p_{wi^(S)}(f).
F(f) = p_wo(f) / p_{wo^(S0)}(f)
     = 1 + p_{wo^(A)}(f) / p_{wo^(S0)}(f)    (4.61)

Since p_{wo^(S0)}(f) = g_d(f) k T_0/2, substituting for p_{wo^(A)} the expression (4.55) we obtain the important relation

F(f) = 1 + T_A / T_0    (4.62)

We note that F is always greater than 1, and it equals 1 in the ideal case of a noiseless 2-port network. Moreover, F is a parameter of the network and does not depend on the noise temperature of the source to which it is connected. From (4.61) the noise power of
We note that F is always greater than 1 and it equals 1 in the ideal case of a noiseless
2-port network. Moreover, F is a parameter of the network and does not depend on the
noise temperature of the source to which it is connected. From (4.61) the noise power of
the 2-port network can be expressed as
p_{wo^(A)}(f) = (k/2) (F − 1) T_0 g_d    (4.63)
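Relations (4.62) and (4.63) give a direct conversion between noise figure and effective noise temperature; a minimal sketch:

```python
import math

T0 = 290.0  # reference room temperature, K

def noise_figure_dB(TA):
    """Noise figure from effective noise temperature, via (4.62)."""
    return 10 * math.log10(1 + TA / T0)

def noise_temperature(F_dB):
    """Inverse relation: effective noise temperature from F in dB."""
    return T0 * (10 ** (F_dB / 10) - 1)

print(round(noise_figure_dB(290.0), 2))   # TA = T0 gives F = 2, i.e. ~3 dB
print(round(noise_temperature(3.0), 1))   # a 3-dB noise figure ~ 288.6 K
```

A noise figure of 3 dB thus corresponds to a 2-port network that adds as much noise as the reference source itself.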
From the above considerations we deduce that to describe the noise of an active 2-port
network, we must assign the gain gd and the noise figure F (or equivalently the noise
temperature T A ). We now see that for a passive network at temperature T0 , it is sufficient
to assign only one of the two parameters. Let us consider a passive network at temperature
T0 , as for example a transmission line, for which gd < 1. To determine the noise figure
let us assume as source an impedance, which is matched to the network for maximum
transfer of power, at temperature T_0. Applying Thévenin's theorem to the network output,
the system is equivalent to a two-terminal device with impedance Z 2 at temperature T0 .
⁴ Given an electrical circuit, a useful relation to determine F, equivalent to (4.61), that employs the PSDs of the output noise signals is (see (4.9))

F(f) = P_wo(f) / P_{wo^(S0)}(f) = 1 + P_{wo^(A)}(f) / P_{wo^(S0)}(f)    (4.60)
Assuming the load is matched for maximum transfer of power, i.e. Z_2 = Z_L*, from (4.46) at the output we have p_wo(f) = kT_0/2. On the other hand, p_{wi^(S0)}(f) = kT_0/2, and p_{wo^(S0)}(f) = g_d p_{wi^(S0)}.
Hence from the first of (4.61) we have

F(f) = 1/g_d = a_d    (4.64)

where a_d is the power attenuation of the network. Note that also in this case, given F, we can determine the effective noise temperature of the network, T_A, according to (4.62). Summarizing, in a connection between a source and a 2-port network, the effective input temperature of the system can be expressed as

T_wi = T_S + (F − 1) T_0    (4.65)

and the available output noise power per unit of frequency as

p_wo(f) = g_d(f) k T_wi / 2    (4.66)
Example 4.2.1
Let us consider the configuration of Figure 4.10a, where an antenna with noise temperature
T S is connected to a pre-amplifier with available power gain g and noise figure F. An
electrical model of the connection is given in Figure 4.10b, where the antenna is modelled
as a resistance with noise temperature T S . If the impedances of the two devices are matched,
Twi is given by (4.65).
g = g_1 g_2    (4.67)
With regard to the noise characteristics, it is sufficient to determine the noise figure of the cascade of the two networks. For a source at room temperature T_0, from (4.66) with T_S = T_0, the noise power at the output of the first network is given by

p_{wo,1}(f) = F_1 g_1 (k T_0 / 2)    (4.68)

At the output of the second network we have

p_{wo,2}(f) = p_{wo,1}(f) g_2 + (k/2) (F_2 − 1) T_0 g_2    (4.69)

using (4.63) to express the noise power due to the second network only. Then the noise figure of the overall network is given by

F = p_{wo,2}(f) / p_{wo,2^(S0)}(f) = [ g_1 g_2 F_1 (k T_0/2) + (F_2 − 1) g_2 (k T_0/2) ] / [ g_1 g_2 (k T_0/2) ]    (4.70)

Simplifying (4.70) we get

F = F_1 + (F_2 − 1) / g_1    (4.71)
Extending this result to the cascade of N 2-port networks A_i, i = 1, …, N, characterized by gains g_i and noise figures F_i, we obtain the Friis formula for the total noise figure

F = F_1 + (F_2 − 1)/g_1 + (F_3 − 1)/(g_1 g_2) + ··· + (F_N − 1)/(g_1 g_2 … g_{N−1})    (4.72)

We observe that F strongly depends on the gain and noise figure parameters of the first stages; in particular, the smaller F_1 and the larger g_1, the more F will be reduced. Substituting (4.62), which relates the noise figure to the effective noise temperature, in (4.72), we find that the equivalent noise temperature of the cascade of N 2-port networks, characterized by noise temperatures T_Ai, i = 1, …, N, is given by

T_A = T_A1 + T_A2/g_1 + T_A3/(g_1 g_2) + ··· + T_AN/(g_1 g_2 … g_{N−1})    (4.73)

g = g_1 g_2 … g_N    (4.74)
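The Friis formula (4.72) is easily evaluated numerically. The sketch below uses hypothetical stage values (a 2-dB, 20-dB-gain LNA followed by a 10-dB, 80-dB-gain amplifier) to show that the first stage dominates the total noise figure:

```python
import math

def friis_noise_figure(stages):
    """Total noise figure of a cascade of 2-port networks, (4.72).
    stages: list of (gain, noise_figure) pairs, both in linear units."""
    F_tot, g_prod = 1.0, 1.0
    for g, F in stages:
        F_tot += (F - 1.0) / g_prod
        g_prod *= g
    return F_tot

# Example (hypothetical values): LNA then high-gain amplifier.
lna = (10 ** (20 / 10), 10 ** (2 / 10))   # g1 = 100, F1 ~ 1.585
amp = (10 ** (80 / 10), 10 ** (10 / 10))  # g2 = 1e8, F2 = 10
F = friis_noise_figure([lna, amp])
print(round(10 * math.log10(F), 2))       # ~2.24 dB, close to F1 alone
```

Swapping the two stages would instead yield a total noise figure essentially equal to 10 dB, which is why the low-noise stage is always placed first.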
Example 4.2.2
The idealized configuration of a transmission medium consisting of a very long cable where
amplifiers are inserted at equally spaced points is illustrated in Figure 4.12. Each section of
the cable, with power attenuation a_c and noise figure F_c = a_c (see (4.64)), cascaded with an amplifier with gain g_A and noise figure F_A, is called a repeater section. To compensate for the attenuation of the cable we choose g_A = a_c. Then each section has a gain

g_sr = (1/a_c) g_A = 1    (4.75)

F_sr = F_c + (F_A − 1)/g_c = a_c + a_c (F_A − 1) = g_A F_A    (4.76)

Therefore the N sections have overall unit gain and noise figure

F = F_sr + (F_sr − 1)/g_sr + ··· + (F_sr − 1)/(g_sr g_sr … g_sr)
  = N (F_sr − 1) + 1
  ≃ N F_sr    (4.77)

where F_sr is given by (4.76). We note that the output noise power of N repeater sections is N times the noise power introduced by an individual section.
and

P_wo = 2 ∫_B p_wi(f) g(f) df    (4.90)

Assuming now that g(f) is constant within B, we have

P_so = P_s g    and    P_wo = (k/2) T_wi g · 2B    (4.91)

assuming that also the source is matched for maximum transfer of power. Finally we get

Λ_out = E[s_o²(t)] / E[w_o²(t)] = P_s / (k T_wi B)    (4.92)

where P_s is the available power of the desired signal at the network input, and T_wi = T_S + (F − 1) T_0 is the effective noise temperature including both the source and the 2-port network. With reference to the above configuration, we observe that the power of w_i could be very high if T_wi is constant over a wide band, but w_o has much smaller power, since its passband coincides with that of the network frequency response. From (4.91) and (4.50), the effective input noise due to the source–network connection has, for T_S = T_0 (T_wi = F T_0), an average power equal to

(P_wi)_dBm = −114 + 10 log₁₀ B|_MHz + (F)_dB    (T_S = T_0)    (4.93)
Example 4.3.1
A station receiving signals from a satellite has an antenna with gain g_ant of 40 dB and a noise temperature T_S of 60 K (that is, the antenna acts as a noisy resistor at a temperature of 60 K). The antenna feeds a preamplifier with a noise temperature T_A1 of 125 K and a gain g_1 of 20 dB. The preamplifier is followed by an amplifier with a noise figure F_2 of 10 dB and a gain g_2 of 80 dB. The transmitted signal bandwidth is 1 MHz. The satellite has an antenna with a power gain of g_sat = 6 dB, and the total attenuation a_ℓ due to the distance between transmitter and receiver is 190 dB. We want to find:
2. the minimum power of the signal transmitted by the satellite to obtain an SNR of 20 dB at the receiver output.
The two receiver amplifiers can be modelled as one amplifier with gain:
2. From Λ_out = (P_so/P_wo) ≥ 20 dB we get (P_so/P_wo) ≥ 100. As P_so = P_s g_sat (1/a_ℓ) g_ant g_A = P_s 10^(−44/10), it follows that

P_s ≥ 73 W    (4.98)
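The numbers of this example can be retraced step by step (Python sketch; all values are those given above):

```python
k = 1.3805e-23                      # Boltzmann constant, J/K
T0, TS = 290.0, 60.0                # room and antenna noise temperatures, K
TA1, g1 = 125.0, 10 ** (20 / 10)    # preamplifier: 125 K, 20 dB
F2, g2 = 10 ** (10 / 10), 10 ** (80 / 10)   # second amplifier: 10 dB NF, 80 dB
B = 1e6                             # signal bandwidth, Hz
net_gain_dB = 6 - 190 + 40 + 20 + 80        # gsat - al + gant + g1 + g2 = -44 dB

TA = TA1 + (F2 - 1) * T0 / g1       # cascade noise temperature, (4.73)
Twi = TS + TA                       # effective input temperature
Pwo = k * Twi * B * g1 * g2         # noise power at the receiver output, W
Ps = 100 * Pwo / 10 ** (net_gain_dB / 10)   # from Pso/Pwo >= 100 (20 dB)
print(round(TA, 1), round(Ps, 1))   # TA ~151.1 K, Ps ~73 W
```

Note how the second amplifier contributes only about 26 K to the cascade temperature, because its noise is divided by the preamplifier gain.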
A more useful relation is obtained by assuming that g(f) is constant within the passband B of the network. Given the average power of the noise generated by the source at room temperature,

P_{wi^(S0)} = (k T_0 / 2) 2B    (4.100)
4.4. Transmission lines 275
and

Λ_in = P_s / P_{wi^(S0)}    (4.101)
Figure 4.14. Model of a transmission-line segment of infinitesimal length dx, with series elements r dx and ℓ dx and shunt elements g dx and c dx.
Let x denote the distance from the origin and L be the length of the line. The termination is found at distance x = 0 and the signal source at x = −L. Let v = v(x, t) and i = i(x, t) be, respectively, the voltage and current at distance x at time t. To determine the law that establishes the voltage and current along the line, let us consider a uniform line segment of infinitesimal length, assumed to be time invariant, depicted in Figure 4.14. The parameters r, ℓ, g, c are known as the primary constants of the line. They define, respectively, the resistance, inductance, conductance, and capacitance of the line per unit length. Primary constants are in general slowly varying functions of frequency; however, in this context, they will be considered time invariant. The model of Figure 4.14 is obtained using the first-order Taylor series expansion of v(x, t) and i(x, t) as a function of the distance x.
Substituting ∂²i/∂t∂x in the first equation with the expression obtained from the second, we get the wave equation

∂²v/∂x² = ℓc ∂²v/∂t² = (1/ν²) ∂²v/∂t²    (4.105)
where ν = 1/√(ℓc) represents the velocity of propagation of the signal on a lossless transmission line. The general solution to the wave equation for a lossless transmission line is given by

v(x, t) = φ₁(t − x/ν) + φ₂(t + x/ν)    (4.106)

where φ₁ and φ₂ are arbitrary functions. Noting that from (4.103) ∂i/∂t = −(1/ℓ) ∂v/∂x, (4.106) yields

ℓ ∂i/∂t = (1/ν) φ₁′(t − x/ν) − (1/ν) φ₂′(t + x/ν)    (4.107)

where φ₁′ and φ₂′ are the derivatives of φ₁ and φ₂, respectively.
Integrating (4.107) we get

i(x, t) = (1/(ℓν)) [φ₁(t − x/ν) − φ₂(t + x/ν)] + φ(x)    (4.108)

where φ(x) is time independent and can therefore be ignored in the study of propagation. Defining the characteristic impedance of a lossless transmission line as

Z₀ = ℓν = √(ℓ/c)    (4.109)

the expression for the current is given by

i(x, t) = (1/Z₀) [φ₁(t − x/ν) − φ₂(t + x/ν)]    (4.110)
From the general solution to the wave equation we find that the voltage (or the current), considered as a function of distance along the line, consists of two waves that propagate in opposite directions: the wave that propagates from the source to the line termination is called the source or incident wave; that which propagates in the opposite direction is called the reflected wave. We consider now the propagation of a sinusoidal wave with frequency f = ω/2π in an ideal transmission line. The voltage at distance x = 0 is given by

The wave propagating in the positive direction of x is given by v₊(x, t) = |V₊| cos[ω(t − x/ν)], that propagating in the negative direction by v₋(x, t) = |V₋| cos[ω(t + x/ν) + φ_p]. The transmission-line voltage is obtained as the sum of the two components and is given by

v(x, t) = |V₊| cos[ω(t − x/ν)] + |V₋| cos[ω(t + x/ν) + φ_p]    (4.112)

The current has the expression

i(x, t) = (|V₊|/Z₀) cos[ω(t − x/ν)] − (|V₋|/Z₀) cos[ω(t + x/ν) + φ_p]    (4.113)
Let us consider a point on the x-axis identified at each time instant t by the condition that the argument of the function F(t − x/ν) is constant. This point is seen by an observer as moving at velocity ν in the positive direction of the x-axis. For sinusoidal waves, the velocity at which the phase is constant is called the phase velocity ν. It is useful to write (4.112) and (4.113) in complex notation, where the phasors V and I represent the amplitude and phase at distance x of the sinusoidal signals (4.112) and (4.113), respectively,

and

τ = 2 Z_L / (Z_L + Z₀)    (4.119)

At the termination, defining the incident power as P₊ = |V₊|²/Z₀ and the reflected power as P₋ = |V₋|²/Z₀, we obtain P₋/P₊ = |ρ|²; the ratio between the power delivered to the load and the incident power is hence given by 1 − |ρ|². Let us consider some specific cases:
d²V/dx² = γ² V    (4.121)

where

γ = √(Z Y)    (4.122)

is a characteristic constant of the transmission line called the propagation constant. Let α and β be, respectively, the real and imaginary parts of γ: α is the attenuation constant, measured in nepers per unit length, and β is the phase constant, measured in radians per unit length. The solution of the differential equation for the voltage can be expressed in terms of exponential functions as

V = V₊ e^(−γx) + V₋ e^(γx)    (4.123)

I = (1/Z₀) (V₊ e^(−γx) − V₋ e^(γx))    (4.124)

where

Z₀ = √(Z/Y)    (4.125)

is the characteristic impedance of the transmission line. The propagation constant and the characteristic impedance are also known as the secondary constants of the transmission line.
Frequency response
Let us consider the transmission line of Figure 4.15, with a sinusoidal voltage source v_i and a load Z_L. From (4.123) the voltage at the load can be expressed as V_L = V₊(1 + ρ). Recalling that V₋/V₊ = ρ, we define the voltage V_o = V_L|_{Z_L=∞} = V₊(1 + ρ)|_{ρ=1} = 2V₊.
Figure 4.15. Transmission line with sinusoidal voltage generator vi and load ZL .
Z₁ = V₁/I₁ = Z₀ (1 + ρ e^(−2γL)) / (1 − ρ e^(−2γL))    (4.127)

Z₂ = V_o^oc / I_L^sc = [V₊ (1 + ρ)|_{ρ=1}] / [(V₊/Z₀)(1 − ρ)|_{ρ=−1}] = Z₀    (4.128)

G_1o = V_o/V₁ = 2 e^(−γL) / (1 + ρ e^(−2γL))    (4.130)

G_i = V₁/V_i = Z₁/(Z_i + Z₁) = Z₀(1 + ρ e^(−2γL)) / [Z₀(1 + ρ e^(−2γL)) + Z_i(1 − ρ e^(−2γL))]    (4.131)
To determine the power gain of the network we can use the general equation (4.25), or observe (4.23); in either case we obtain

g(f) = (1 − |ρ|²) e^(−2αL) / (1 − |ρ|² e^(−4αL))    (4.136)

where α = Re[γ].
We note that, for a matched transmission line, the available attenuation is given by

a_d(f) = 1/|e^(−γL)|² = e^(2αL)    (4.137)

In (4.137), α expresses the attenuation in nepers per unit length. Alternatively, one can introduce an attenuation in dB per unit length, (ã_d(f))_dB, as

a_d(f) = 10^((1/10)(ã_d(f))_dB L)    (4.138)

The relation between α and (ã_d(f))_dB is given by

(ã_d(f))_dB = 8.686 α    (4.139)

From (4.139), the attenuation in dB introduced by the transmission line is equal to

(a_d(f))_dB = (ã_d(f))_dB L    (4.140)
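The factor in (4.139) is 20 log₁₀ e ≈ 8.686 dB per neper; a minimal sketch (the line parameters are arbitrary example values):

```python
import math

NEPER_TO_DB = 20 * math.log10(math.e)   # ~8.686, the factor in (4.139)

def ad_dB(alpha_neper_per_km, length_km):
    """Attenuation in dB of a matched line of given length, (4.137)-(4.140)."""
    return NEPER_TO_DB * alpha_neper_per_km * length_km

print(round(NEPER_TO_DB, 3))       # 8.686
print(round(ad_dB(0.5, 10), 1))    # 0.5 Np/km over 10 km -> ~43.4 dB
```

The same conversion applies to any quantity expressed in nepers, since both units measure the logarithm of an amplitude ratio.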
In a transmission line with a non-matched resistive load that satisfies the condition Z_L ≪ Z₀, from (4.118) we get 1 + ρ ≃ 2 Z_L/Z₀, ρ² e^(−4αL) ≃ 0, and ρ² ≃ 1 − 4 Z_L/Z₀. Therefore (4.136) yields

(a_d(f))_dB = (ã_d(f))_dB L − 10 log₁₀ (4 |Z_L/Z₀|)    (4.141)
r c = g ℓ    (4.142)

and

β = ω √(ℓc/2) { [1 + r²/(ω²ℓ²)]^(1/2) + 1 }^(1/2)    (4.144)

We note that, given the value of α(f) at a certain frequency f = f₀, we can obtain the value of K; it is then possible to determine the attenuation constant at every other frequency. From the expression (4.133) of the frequency response of a matched transmission line, with γ given by (4.147), and without considering the delay introduced by the term jω√(ℓc), the impulse response has the following expression:

g_Ch(t) = (K L / (2 √(π t³))) e^(−(K L)²/(4t)) 1(t)    (4.150)
The pulse signal g_Ch is shown in Figure 4.16 for various values of the product K L. We note a larger dispersion of g_Ch for increasing values of K L.
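The behavior shown in Figure 4.16 can be reproduced by evaluating (4.150) directly (Python sketch; the values of KL match those in the figure). The pulse peaks at t = (KL)²/6, and the peak value decreases roughly as 1/(KL)², which quantifies the increasing dispersion:

```python
import math

def g_ch(t, KL):
    """Impulse response (4.150) of a matched line (propagation delay omitted)."""
    if t <= 0:
        return 0.0
    return KL / (2 * math.sqrt(math.pi * t ** 3)) * math.exp(-KL ** 2 / (4 * t))

for KL in (2.0, 4.0, 6.0):
    t_peak = KL ** 2 / 6          # mode of the pulse, from d/dt log g_ch = 0
    print(KL, round(t_peak, 2), round(g_ch(t_peak, KL), 4))
```

The area of the pulse up to time T equals erfc(KL/(2√T)), which tends to 1 as T grows: the line disperses the impulse without changing its integral.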
Figure 4.16. Impulse response of a matched transmission line for various values of KL.
Figure 4.17. Attenuation as a function of frequency for some telephone transmission lines: three are polyethylene-insulated cables (PIC) and one is a coaxial cable with a diameter of 9.525 mm. [© 1982 Bell Telephone Laboratories. Reproduced with permission of Lucent Technologies, Inc./Bell Labs.]
albeit with a different constant of proportionality. In any case, in the local loop,⁵ to force the primary constants to satisfy the Heaviside condition in the voice band, which goes from 300 to 3400 Hz, lumped inductors were formerly placed at equally spaced points along the transmission line. This procedure, called inductive loading, causes α(f) to be flat in the voice band, but considerably increases the attenuation outside of the voice band. Moreover, the phase β(f) may be strongly distorted in the passband. Typical behaviors of α and β in the frequency band 0–4000 Hz, with and without loading, are given in Figure 4.18 for a 22-gauge transmission line [2]. The digital subscriber line (DSL) technologies, introduced for data transmission in the local loop, require a bandwidth much greater than 4 kHz, up to about 20 MHz for the VDSL technology (see Chapter 17). For DSL applications it is therefore necessary to remove possible loading coils that are present in the local loops. The frequency response of a DSL transmission line can also be modified by the presence of one or more bridged taps. A bridged tap consists of a twisted-pair cable of a certain length L_BT, terminated by an open circuit and connected in parallel to a local loop. At the connection point, the incident signal separates into two components. The component propagating along the bridged tap is reflected at the open circuit; the component propagating on the transmission line must therefore be calculated taking this reflected component into consideration. At the frequencies f_BT = ν/λ_BT, where λ_BT satisfies the condition (2n + 1) λ_BT/4 = L_BT, n = 0, 1, …, we get destructive interference at the connection point between the reflected and incident components: this interference reveals itself as a notch in the frequency response of the transmission line. Given
Figure 4.18. Attenuation constant $\alpha$ and phase constant $\beta$ for a telephone transmission line with and without loading. [© 1982 Bell Telephone Laboratories. Reproduced with permission of Lucent Technologies, Inc./Bell Labs.]
5 By local loop we mean the transmission line that goes from the user telephone set to the central office.
the large number of transmission lines actually in use, to evaluate the performance of DSL
systems we usually refer to a limited number of loop characteristics, which can be viewed
as samples taken from the ensemble of frequency responses. On the other hand, the trans-
mission characteristics of unshielded twisted-pair (UTP) cables commonly used for data
transmission over local area networks are defined by the EIA/TIA and ISO/IEC standards.
As illustrated in Table 4.3, the cables are divided into different categories according to the
values of 1) the signal attenuation per unit of length, 2) the attenuation of the near-end
cross-talk signal, or NEXT, that will be defined in the next section, and 3) the characteristic
impedance. Cables of category three (UTP-3) are commonly called voice-grade, those of
categories four and five (UTP-4 and UTP-5) are data-grade. We note that the signal atten-
uation and the intensity of NEXT are substantially larger for UTP-3 cables than for UTP-4
and UTP-5 cables.
4.4.2 Cross-talk
The interference signal that is commonly referred to as cross-talk is determined by magnetic coupling and unbalanced capacitance between two adjacent transmission lines. Let us consider the two transmission lines of Figure 4.19, where the terminals $(1, 1')$ belong to the disturbing transmission line and the terminals $(2, 2')$ belong to the disturbed transmission line. In the study of the interference signal produced by magnetic coupling, we consider
[Figures 4.19–4.21: (a) circuit model of magnetic coupling, with mutual inductance $m$ between the disturbing line, driven by $v_1$, and the disturbed line, both terminated on $Z_0$; (b) bridge circuit model of capacitive coupling through the capacitances $c_{11'}$, $c_{12}$, $c_{12'}$, $c_{1'2}$, $c_{1'2'}$, $c_{22'}$.]
the circuit of Figure 4.20. We will assume that the length of the transmission line is much shorter than the wavelength corresponding to the maximum transmitted frequency and that the impedance $Z_0$ is much higher than the inductor reactance. The induced electromotive force (EMF) is given by $E = j2\pi f\, m I_1$, where $I_1 \simeq V_1/Z_0$. The EMF produces a current
$I_m = \dfrac{E}{2Z_0} = \dfrac{j2\pi f\, m}{2Z_0}\, I_1$
that can be expressed as $I_m = j2\pi f \dfrac{m}{2Z_0^2}\, V_1$.
To study the interference signal due to unbalanced capacitance, we consider the circuit of Figure 4.21a, which can be redrawn in an equivalent way as illustrated in Figure 4.21b. We assume that the impedance $Z_0$ is much smaller than the reactance of the capacitors that form the bridge. Applying the principle of the equivalent generator we find
$I_c = \dfrac{V_{22'}\big|_{I_c=0}}{Z_{22'}} = \left( \dfrac{\dfrac{1}{c_{1'2'}}}{\dfrac{1}{c_{1'2'}}+\dfrac{1}{c_{12'}}} - \dfrac{\dfrac{1}{c_{1'2}}}{\dfrac{1}{c_{1'2}}+\dfrac{1}{c_{12}}} \right) \dfrac{j2\pi f\, V_1}{\dfrac{1}{c_{12}+c_{1'2}}+\dfrac{1}{c_{12'}+c_{1'2'}}} \qquad (4.151)$
Figure 4.22. Illustration of near-end cross-talk (NEXT) and far-end cross-talk (FEXT) signals.
Recalling that the current $I_c$ is equally divided between the impedances $Z_0$ on which the transmission line terminates, we find that the cross-talk current produced at the transmitter-side termination is $I_p = I_m + I_c/2$, and the cross-talk current produced at the receiver-side termination is $I_t = -I_m + I_c/2$. As illustrated in Figure 4.22, the interference signals are called near-end cross-talk (NEXT) or far-end cross-talk (FEXT), depending on whether the receiver of the disturbed line is on the same side as the transmitter of the disturbing line or on the opposite side, respectively. We now evaluate the total contribution of the near-end and far-end cross-talk signals for lines with distributed impedances.
Near-end cross-talk
Let
$a_p(x) = \dfrac{m(x)}{2Z_0} + Z_0\,\dfrac{\Delta c(x)}{2} \qquad (4.153)$
be the near-end cross-talk coupling function at distance $x$ from the origin. In complex notation, the NEXT signal is expressed as
$V_p = Z_0 I_p = V_1 \displaystyle\int_0^L e^{-2\gamma x}\; j2\pi f\; a_p(x)\; dx \qquad (4.154)$
To calculate the power spectral density of NEXT we need to know the autocorrelation function of the random process $a_p(x)$. A model commonly used in practice assumes that $a_p(x)$ is a white stationary random process, with autocorrelation
$\mathtt{r}_{a_p}(\xi) = r_p(0)\,\delta(\xi) \qquad (4.155)$
Then
$\mathrm{E}[|V_p(f)|^2] = \mathrm{E}[|V_1(f)|^2]\;\dfrac{\pi^{3/2}\, r_p(0)}{K}\; f^{3/2}\left(1 - e^{-4K\sqrt{\pi f}\,L}\right) \simeq \mathrm{E}[|V_1(f)|^2]\; k_p\, f^{3/2} \qquad (4.156)$
where $K$ is defined by (4.148), and
$k_p = \dfrac{\pi^{3/2}\, r_p(0)}{K} \qquad (4.157)$
Using (1.449), the level of NEXT coupling is given by6
$|G_p(f)|^2 = \dfrac{\mathrm{E}[|V_p(f)|^2]}{\mathrm{E}[|V_1(f)|^2]} \simeq k_p\, f^{3/2} \qquad (4.158)$
To perform computer simulations of data transmission systems over metallic lines in the presence of NEXT, it is required to characterize not only the amplitude, but also the phase of the NEXT coupling. In addition to experimental models obtained through laboratory measurements, the following stochastic model is used:
$a_p(x) = \displaystyle\sum_{i=0}^{\frac{L}{\Delta x}-1} a_i\; w_{\Delta x}(x - i\Delta x) \qquad (4.159)$
with
$w_{\Delta x}(x) = \begin{cases} 1 & \text{if } x \in [0, \Delta x) \\ 0 & \text{otherwise} \end{cases} \qquad (4.160)$
If we know the parameters of the transmission line $K$ and $k_p$, then from (4.157) and (4.161) the variance of the coefficients $a_i$ to be used in the simulations is given by
$\mathrm{E}[a_i^2] = \dfrac{K\, k_p}{\pi^{3/2}\, \Delta x} \qquad (4.163)$
6 Observing (1.449), $|G_p(f)|^2$ is also equal to the ratio between the PSDs of $v_p$ and $v_1$.
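A minimal Monte Carlo sketch of the stochastic model (4.159)–(4.163): the values of $K$, $k_p$, $L$, $\Delta x$ and $f$ below are arbitrary illustrative assumptions, not taken from the text, and the skin-effect approximation $\gamma(f) = K\sqrt{\pi f}\,(1+j)$ (i.e. $\alpha = \beta = K\sqrt{\pi f}$) is likewise an assumption. Each realization draws the coefficients $a_i$ as zero-mean Gaussians with variance (4.163); the ensemble average of $|G_p(f)|^2$ is then compared with (4.156).

```python
import cmath, math, random

K_line = 4e-6      # line constant: alpha(f) = K*sqrt(pi*f)   [assumed value]
k_p = 1e-14        # NEXT coupling level, eq. (4.157)          [assumed value]
L = 100.0          # line length (m)
dx = 1.0           # segment length (m)
f = 1e7            # frequency (Hz)

var_a = K_line * k_p / (math.pi ** 1.5 * dx)         # variance of a_i, eq. (4.163)
gamma = K_line * math.sqrt(math.pi * f) * (1 + 1j)   # skin-effect propagation constant

n_seg = int(L / dx)
# fixed per-segment round-trip propagation factors e^(-2*gamma*x_i) * dx
seg = [cmath.exp(-2.0 * gamma * (i * dx)) * dx for i in range(n_seg)]

rng = random.Random(1)
trials = 3000
sd = math.sqrt(var_a)
acc = 0.0
for _ in range(trials):
    Gp = sum(rng.gauss(0.0, sd) * s for s in seg)    # integral in (4.154), discretized
    acc += abs(1j * 2.0 * math.pi * f * Gp) ** 2
mean_sq = acc / trials

alpha = K_line * math.sqrt(math.pi * f)
theory = k_p * f ** 1.5 * (1.0 - math.exp(-4.0 * alpha * L))   # eq. (4.156)
```

The sample mean converges to the theoretical level of (4.156) up to Monte Carlo noise and a small discretization bias from the finite $\Delta x$.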
Far-end cross-talk
Let
$a_t(x) = -\dfrac{m(x)}{2Z_0} + Z_0\,\dfrac{\Delta c(x)}{2} \qquad (4.164)$
be the far-end cross-talk coupling function at distance $x$ from the origin. In complex notation, the FEXT signal is given by
$V_t = Z_0 I_t = V_1\, e^{-\gamma L} \displaystyle\int_0^L j2\pi f\; a_t(x)\; dx \qquad (4.165)$
Analogously to the case of NEXT, we assume that $a_t$ is a white stationary random process, with autocorrelation $\mathtt{r}_{a_t}(\xi) = r_t(0)\,\delta(\xi)$. The level of FEXT coupling is given by
$|G_t(f)|^2 = \dfrac{\mathrm{E}[|V_t(f)|^2]}{\mathrm{E}[|V_1(f)|^2]} = k_t\, f^2\, L\, e^{-2K\sqrt{\pi f}\,L} \qquad (4.168)$
where $L$ is the length of the transmission line and $k_t = 4\pi^2\, r_t(0)$.
Example 4.4.1
For local-area network (LAN) applications, the maximum length of cables connecting stations is typically limited to 100 m. Deviations from the characteristic expressed by (4.147) may be caused by losses in the dielectric material of the cable, the presence of connectors, non-homogeneity of the transmission line, etc.
For the IEEE Standard 100BASE-T2, which defines the physical layer for data transmission at 100 Mb/s over UTP-3 cables in Ethernet LANs (see Chapter 17), the following worst-case frequency response is considered:
$G_{Ch}(f) = 10^{-\frac{1.2}{20}}\; e^{-(0.00385\sqrt{jf}\, +\, 0.00028 f)\,L} \qquad (4.169)$
where $f$ is expressed in MHz and $L$ in meters. In (4.169), the term $e^{-j2\pi f\sqrt{\ell c}\,L}$ is ignored, as it represents a constant propagation delay. A frequency-independent attenuation of 1.2 dB has been included to take into account the attenuation caused by the possible presence of connectors. The amplitude of the frequency response obtained for a cable length $L = 100$ m is shown in Figure 4.23 [4]. We note that the signal attenuation at the frequency of 16 MHz is equal to 14.6 dB, a higher value than that indicated in Table 4.3 for UTP-3 cables.
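The attenuation value quoted above can be reproduced directly from (4.169); as in the text, $f$ is in MHz and $L$ in meters.

```python
import cmath, math

def G_ch(f_mhz, L_m):
    # worst-case 100BASE-T2 frequency response (4.169);
    # the 10^(-1.2/20) factor accounts for the 1.2 dB connector loss
    return 10 ** (-1.2 / 20) * cmath.exp(-(0.00385 * cmath.sqrt(1j * f_mhz)
                                           + 0.00028 * f_mhz) * L_m)

att_16mhz_db = -20.0 * math.log10(abs(G_ch(16.0, 100.0)))   # ~14.6 dB for L = 100 m
att_30mhz_db = -20.0 * math.log10(abs(G_ch(30.0, 100.0)))   # attenuation grows with f
```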
[Plot: amplitude characteristic for 100 m cable length, equal to −14.6 dB at 16 MHz, together with the NEXT coupling envelope curve $-21 + 15\log_{10}(f/16)$ dB, $f$ in MHz, equal to −21.0 dB at 16 MHz.]
Figure 4.23. Amplitude of the frequency response for a voice-grade twisted-pair cable with length equal to 100 m, and four realizations of the NEXT coupling function. [© 1997 IEEE.]
The level of NEXT coupling (4.158) is illustrated in Figure 4.23 as a dotted line; we note the increase as a function of frequency of 15 dB/decade, due to the factor $f^{3/2}$. The level of NEXT coupling, equal to −21 dB at the frequency of 16 MHz, is larger than that given in Table 4.3 for UTP-3 cables. The amplitude characteristics of four realizations of the NEXT coupling function (4.162) are also shown in Figure 4.23.
Figure 4.24. Attenuation curve as a function of wavelength for an optical fiber. [From Li (1980), see also Miya et al. (1979), © 1980 IEEE.]
to a bandwidth of $2 \cdot 10^{14}$ Hz. Three regions are typically used for transmission: the first window goes from 800 to 900 nm, the second from 1250 to 1350 nm, and the third from 1500 to 1600 nm.
We immediately realize the enormous capacity of fiber transmission systems: for example, a system that uses only 1% of the $2 \cdot 10^{14}$ Hz bandwidth mentioned above has an available bandwidth of $2 \cdot 10^{12}$ Hz, equivalent to that needed for the transmission of about 300,000 television signals, each with a bandwidth of 6 MHz. To efficiently use the band in the
optical spectrum, multiplexing techniques using optical devices have been developed, such
as wavelength-division multiplexing (WDM) and optical frequency-division multiplexing
(O-FDM). Moreover, we note that, although the propagation of electromagnetic fields in the atmosphere at these frequencies is also considered for transmission (see Section 17.2.1), the majority of optical communication systems employ as transmission medium an optical fiber, which acts as a waveguide. A fundamental device in optical communications is the laser, which, beginning in the 1970s, made coherent light sources available for the transmission of signals.
$\Delta\tau = (M + M_g)\, L\, \Delta\lambda \qquad (4.170)$
where $M$ is the dispersion coefficient of the material, $M_g$ is the dispersion coefficient related to the geometry of the waveguide, $L$ denotes the length of the fiber and $\Delta\lambda$ denotes the spectral width of the light source. The total dispersion $(M + M_g)$ has values near 120, 0, and 15 ps/(nm × km) at wavelengths of 850, 1300, and 1550 nm, respectively.
The bandwidth of the transmission medium is inversely proportional to the dispersion;
we note that the dispersion is minimum in the second window, with values near zero
around the wavelength of 1300 nm for conventional fibers. Special fibers are designed to
compensate for the dispersion introduced by the material; because of the low attenuation
and dispersion, these fibers are normally used in very long distance connections.
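A small numerical illustration of (4.170); the span length and the source spectral widths below are arbitrary assumed values, chosen only to contrast an LED-like source with a laser diode in the third window.

```python
def pulse_broadening_ps(M, Mg, L_km, dlambda_nm):
    # Delta_tau = (M + Mg) * L * Delta_lambda, eq. (4.170)
    # M, Mg in ps/(nm x km), L in km, Delta_lambda in nm -> result in ps
    return (M + Mg) * L_km * dlambda_nm

# third window (1550 nm), total dispersion ~15 ps/(nm x km), 10 km span:
broad_led = pulse_broadening_ps(15.0, 0.0, 10.0, 1.0)   # 1 nm-wide source -> 150 ps
broad_ld = pulse_broadening_ps(15.0, 0.0, 10.0, 0.1)    # 0.1 nm laser line -> 15 ps
```

The tenfold reduction in spectral width translates directly into a tenfold reduction in pulse broadening, which is why laser diodes are preferred for wideband links (see the discussion of LDs versus LEDs below).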
Multimode fibers allow the propagation of more than one mode of the electromagnetic
field. In this case the medium introduces signal distortion caused by the fact that propagation
of energy for different modes has different speeds: for this reason multimodal fibers are
used in applications where the transmission bandwidth and the length of the transmission
line are not large.
Monomode fibers limit the propagation to a single mode, thus eliminating the dispersion
caused by multimode propagation. Because in this case the dispersion is due only to the
material and the geometry of the waveguide, monomodal fibers are preferred for applications
that require wide transmission bandwidth and very long transmission lines.
In Table 4.4 typical values of the transmission bandwidth, normalized by the length of the
optical fiber, are given for different types of fibers. The step-index (SI) fiber is characterized
by a constant value of the refraction index, whereas the graded-index (GRIN) fiber has a
refraction index decreasing with the distance from the fiber axis. As noticed previously, the
monomodal fibers are characterized by larger bandwidths; to limit the number of modes
294 Chapter 4. Transmission media
to one, the diameter of the monomodal fiber is related to the wavelength and is normally
about one order of magnitude smaller than that of multimodal fibers.
Semiconductor laser diodes (LDs) or light-emitting diodes (LEDs) are used as signal light sources in most applications; these sources are usually modulated by electronic devices. The conversion from a current signal $i$ to an electromagnetic field that propagates along the fiber can be described in terms of light signal power by the relation
$P_{Tx} = k_0 + k_1 i \qquad (4.171)$
where $k_0$ and $k_1$ are constants. The transmitted waveform can therefore be seen as a replica of the modulation signal, in this case the current signal. Laser diodes are characterized by a smaller spectral width $\Delta\lambda$ as compared to that of LEDs, and therefore lead to a lower dispersion (see (4.170)).
The more widely used photodetector devices are semiconductor photodiodes, which convert the optical signal into a current signal according to the relation
$i = \rho P_{Rc} \qquad (4.172)$
where $i$ is the device output current, $P_{Rc}$ is the power of the incident optical signal and $\rho$ is the photodetector responsivity. Typical values of $\rho$ are of the order of 0.5 mA/mW.
Signal quality is measured by the signal-to-noise ratio expressed as
$\Lambda = \dfrac{(g_i\, \rho\, P_{Rc})^2\, R_L}{g_i^n\, 2e R_L B (I_D + \rho P_{Rc}) + 4kT_w B} \qquad (4.173)$
where $g_i$ is the photodetector current gain, $n$ is a parameter that indicates the photodetector excess noise, $B$ is the receiver bandwidth, $k$ is the Boltzmann constant, $e$ is the charge of the electron, $T_w$ is the effective noise temperature in kelvin, $I_D$ is the photodetector dark current, and $R_L$ is the resistance of the load that follows the photodetector. We note that in the denominator of (4.173) the first term is due to shot noise and the second term to thermal noise.
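As a sketch of how (4.173) trades off the two noise terms, the following evaluates $\Lambda$ for a hypothetical PIN photodiode ($g_i = 1$, $n = 2$); the operating point (responsivity, incident power, load, bandwidth, dark current, noise temperature) is an assumed example, not taken from the text. At microwatt power levels and room temperature the thermal term dominates the denominator.

```python
import math

E_CHARGE = 1.602e-19   # electron charge (C)
K_BOLTZ = 1.381e-23    # Boltzmann constant (J/K)

def snr_photodiode(gi, n, rho, P_rc, R_L, B, I_D, T_w):
    # eq. (4.173): signal power over shot-noise plus thermal-noise power
    signal = (gi * rho * P_rc) ** 2 * R_L
    shot = gi ** n * 2.0 * E_CHARGE * R_L * B * (I_D + rho * P_rc)
    thermal = 4.0 * K_BOLTZ * T_w * B
    return signal / (shot + thermal)

# assumed operating point: rho = 0.5 A/W, 1 uW incident power, 50 ohm load,
# 1 GHz bandwidth, 1 nA dark current, T_w = 300 K
snr_lin = snr_photodiode(1.0, 2.0, 0.5, 1e-6, 50.0, 1e9, 1e-9, 300.0)
```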
Very low frequency (VLF), for $f_0 < 0.3$ MHz. The earth and the ionosphere form a waveguide for the electromagnetic waves. At these frequencies the signals propagate around the earth.
Medium frequency (MF), for $0.3 < f_0 < 3$ MHz. The waves propagate as ground waves up to a distance of 160 km.
High frequency (HF), for $3 < f_0 < 30$ MHz. The waves are reflected by the ionosphere at an altitude that may vary between 50 and 400 km.
Very high frequency (VHF), for $30 < f_0 < 300$ MHz. For $f_0 > 30$ MHz, the signal propagates through the ionosphere with small attenuation. Therefore these frequencies are adopted for satellite communications. They are also employed for line-of-sight transmissions, using high towers on which the antennas are positioned to cover a wide area. The limit to the coverage is set by the earth's curvature. If $h$ is the height of the tower in meters, the range covered, expressed in km, is $r = 1.3\sqrt{h}$: for example, if $h = 100$ m, coverage is up to about $r = 13$ km. However, ionospheric and tropospheric scattering (at an altitude of 16 km or less) are present at frequencies in the range 30–60 MHz and 40–300 MHz, respectively, which cause the signal to propagate over long distances with large attenuations.
Ultra high frequency (UHF), for 300 MHz $< f_0 <$ 3 GHz.
Super high frequency (SHF), for $3 < f_0 < 30$ GHz. At frequencies above about 10 GHz, atmospheric conditions play an important role in signal propagation. We note the following absorption phenomena, which cause additional signal attenuation:
1. due to oxygen: for $f_0 > 30$ GHz, with peak attenuation at 60 GHz;
2. due to water vapor: for $f_0 > 20$ GHz, with peak attenuation at around 22 GHz;
3. due to rain: for $f_0 > 10$ GHz, if the diameter of the rain drops is of the order of the signal wavelength.
We note that, if the antennas are not positioned high enough above the ground, the electro-
magnetic field propagates not only into the free space but also through ground waves.
Radiation masks
A radio channel by itself does not set constraints on the frequency band that can be used
for transmission. In any case, to prevent interference among radio transmissions, regulatory
bodies specify power radiation masks: a typical example is given in Figure 4.27, where
the plot represents the limit on the power spectrum of the transmitted signal with reference
to the power of a non-modulated carrier. To comply with these limits, a filter is usually
employed at the transmitter front-end.
Figure 4.27. Radiation mask of the GSM system with a bandwidth of 200 kHz around the
carrier.
consists in approximating an electromagnetic wave as a ray (in the optical sense), is often
adequate.
The deterministic model is used to evaluate the power of the received signal when there
are no obstacles between the transmitter and receiver, that is in the presence of line of
sight: in this case we can think of only one wave that propagates from the transmitter to
the receiver. This situation is typical of transmissions between satellites and terrestrial radio
stations in the microwave frequency range (3 < f 0 < 70 GHz).
Let $P_{Tx}$ be the power of the signal transmitted by an ideal isotropic antenna, which radiates uniformly in all directions in free space. At a distance $d$ from the antenna, the power density is
$\Phi_0 = \dfrac{P_{Tx}}{4\pi d^2} \quad \text{(W/m}^2\text{)} \qquad (4.174)$
where $4\pi d^2$ is the surface of a sphere of radius $d$ that is uniformly illuminated by the antenna. We observe that the power density decreases with the square of the distance. On a logarithmic scale (dB) this is equivalent to a decrease of 20 dB per decade with the distance.
In the case of a directional antenna, the power density is concentrated within a cone and is given by
$\Phi = G_{Tx}\, \Phi_0 = \dfrac{G_{Tx} P_{Tx}}{4\pi d^2} \qquad (4.175)$
where $G_{Tx}$ is the transmit antenna gain. Obviously, $G_{Tx} = 1$ for an isotropic antenna; usually, $G_{Tx} \gg 1$ for a directional antenna.
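The inverse-square law in (4.174)–(4.175) translates into the 20 dB-per-decade decay mentioned above:

```python
import math

def power_density(P_tx, d, G_tx=1.0):
    # eq. (4.175), which reduces to (4.174) for G_tx = 1: W/m^2 at distance d (m)
    return G_tx * P_tx / (4.0 * math.pi * d ** 2)

# decay over one decade of distance, in dB
drop_db = 10.0 * math.log10(power_density(1.0, 1.0) / power_density(1.0, 10.0))
```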
Multipath
It is useful to study the propagation of a sinusoidal signal hypothesizing that the one-ray model is adequate, which implies using a directional antenna. Let $s_{Tx}$ be a narrowband
7 The constraint that $G_{Ch}^{(bb)}(f) = 0$ for $f < -f_0$ was removed because the input already satisfies the condition $S_{Tx}^{(bb)}(f) = 0$ for $f < -f_0$.
Therefore signal amplitudes, corresponding to rays that are not the direct, or line-of-sight, ray, undergo an attenuation due to reflections that is added to the attenuation due to distance. The total phase shift associated with each ray is obtained by summing the phase shifts introduced by the various reflections and the phase shift due to the distance traveled. If $N_c$ is the number of paths and $d_i$ is the distance traveled by the $i$-th ray, extending the channel model (4.186) we get
$g_{Ch}(\tau) = \mathrm{Re}\left[\dfrac{A_0}{A_{Tx}} \displaystyle\sum_{i=1}^{N_c} \dfrac{a_i}{d_i}\; h^{(a)}(\tau - \tau_i)\right] \qquad (4.190)$
where $\tau_i = d_i/c$ is the delay of the $i$-th ray. The complex envelope of the channel impulse response (4.190) around $f_0$ is equal to
$g_{Ch}^{(bb)}(\tau) = \dfrac{2A_0}{A_{Tx}} \displaystyle\sum_{i=1}^{N_c} \dfrac{a_i}{d_i}\; e^{-j2\pi f_0 \tau_i}\; \delta(\tau - \tau_i) \qquad (4.191)$
We note that the only difference between the passband model and its baseband equivalent is the additional phase term $e^{-j2\pi f_0 \tau_i}$ for the $i$-th ray.
Limited to narrowband signals, extending the channel model (4.188) to the case of many reflections, the received signal can still be written as
$A_{Rc}\, e^{j\varphi_{Rc}} = A_0 \displaystyle\sum_{i=1}^{N_c} \dfrac{a_i}{d_i}\; e^{-j\varphi_i} \qquad (4.193)$
with $\varphi_i = 2\pi f_0 \tau_i$. Let $A_i$ and $\psi_i$ be the amplitude and phase, respectively, of the term $A_0 (a_i/d_i) e^{-j\varphi_i}$; from (4.193) the resulting signal is given by the sum of $A_i e^{j\psi_i}$, $i = 1, \ldots, N_c$, as represented in Figure 4.29. As $P_0 = A_0^2/2$, the received power is
[Figure 4.29: phasor diagram showing the sum of the components $A_i e^{j\psi_i}$, with phases $\psi_1$, $\psi_2$, $\psi_3$, yielding the resultant $A_{Rc} e^{j\varphi_{Rc}}$.]
$P_{Rc} = P_0 \left| \displaystyle\sum_{i=1}^{N_c} \dfrac{a_i}{d_i}\; e^{-j\varphi_i} \right|^2 \qquad (4.194)$
and is independent of the total phase of the first ray. We will now give two examples of application of the previous results.
[Figure: two-ray geometry with antenna heights $h_1$ and $h_2$ and the LOS path.]
longer valid. It is assumed, moreover, that the rays that reach the receive antenna are due, respectively, to the LOS path, reflection from the floor, and reflection from the ceiling. As a result the received power is given by
$P_{Rc} = P_0 \left| \displaystyle\sum_{i=1}^{3} \dfrac{a_i}{d_i}\; e^{-j\varphi_i} \right|^2 \qquad (4.198)$
where the reflection coefficients are $a_1 = 1$ for the LOS path, and $a_2 = a_3 = -0.7$. With these assumptions, one finds that the power decreases with the distance in an erratic way, in the sense that by varying the position of the antennas the received power presents fluctuations of about 20–30 dB. In fact, depending on the position, the phases of the various rays change and the sum in (4.193) also varies: in some positions all rays are aligned in phase and the received power is high, whereas in others the rays cancel each other and the received power is low. In the previous example this phenomenon is not observed because the distance $d$ is much larger than the antenna heights, and the phase difference between the two rays always remains small.
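A sketch of the three-ray computation (4.198). The geometry (antenna heights, room height, carrier frequency) and the reflection coefficients are arbitrary assumptions introduced only to reproduce the qualitative behavior: scanning the horizontal distance over a few meters, the normalized received power $P_{Rc}/P_0$ fluctuates by tens of dB as the three phasors move in and out of alignment.

```python
import math

C_LIGHT = 3e8
f0 = 2.4e9                            # assumed carrier frequency (Hz)
lam = C_LIGHT / f0
h_tx, h_rx, h_ceil = 1.5, 1.0, 3.0    # assumed antenna and ceiling heights (m)
a = [1.0, -0.7, -0.7]                 # LOS, floor and ceiling coefficients (assumed)

def normalized_power(d):
    # eq. (4.198): P_Rc/P0 = |sum_i (a_i/d_i) exp(-j*phi_i)|^2, phi_i = 2*pi*d_i/lam
    d1 = math.hypot(d, h_tx - h_rx)                    # LOS path
    d2 = math.hypot(d, h_tx + h_rx)                    # floor image
    d3 = math.hypot(d, (2.0 * h_ceil - h_tx) - h_rx)   # ceiling image
    re = im = 0.0
    for ai, di in zip(a, (d1, d2, d3)):
        phi = 2.0 * math.pi * di / lam
        re += ai / di * math.cos(phi)
        im -= ai / di * math.sin(phi)
    return re * re + im * im

# scan the horizontal distance from 5 m to 15 m in 1 mm steps
samples_db = [10.0 * math.log10(normalized_power(5.0 + 0.001 * k)) for k in range(10000)]
spread_db = max(samples_db) - min(samples_db)
```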
[Figure: geometry for the Doppler shift computation: the mobile receiver moves from P to Q, covering a distance $\Delta l$ at an angle $\theta$ with respect to the direction of the transmitter Tx.]
This implies that if a narrowband signal given by (4.184) is transmitted, the received signal is
$s_{Rc}(t) = \mathrm{Re}[A_{Rc}\, e^{j2\pi(f_0 - f_s)t}] \qquad (4.201)$
Equation (4.200) relates the Doppler shift to the speed of the receiver and the angle $\theta$; in particular, for $\theta = 0$ we get
$f_s = 9.259 \cdot 10^{-4}\; v_p\big|_{\text{km/h}}\; f_0\big|_{\text{MHz}} \quad \text{(Hz)} \qquad (4.202)$
where $v_p\big|_{\text{km/h}}$ is the speed of the mobile in km/h, and $f_0\big|_{\text{MHz}}$ is the carrier frequency in MHz. For example, if $v_p = 100$ km/h and $f_0 = 900$ MHz we have $f_s = 83$ Hz. We note that if the receiver moves towards the transmitter the Doppler shift is positive, whereas if it moves away from the transmitter the Doppler shift is negative.
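Equation (4.202) in executable form, reproducing the numerical example above; the general-angle version with $\cos\theta$ is an extension suggested by (4.200), not a formula quoted in the text.

```python
import math

def doppler_shift_hz(v_kmh, f0_mhz, theta_rad=0.0):
    # f_s = 9.259e-4 * v_p|km/h * f0|MHz, eq. (4.202) for theta = 0;
    # the cos(theta) factor follows the dependence on the angle in (4.200)
    return 9.259e-4 * v_kmh * f0_mhz * math.cos(theta_rad)

fs_example = doppler_shift_hz(100.0, 900.0)   # ~83 Hz, as in the text
```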
We now consider a narrowband signal transmitted in an indoor environment8 where the signal received by the antenna is given by the contribution of many rays, each with a different length. If the signal propagation took place through only one ray, the received signal would undergo only one Doppler shift. But according to (4.200) the frequency shift $f_s$ depends on the angle $\theta$. Therefore, because of the different paths, the received signal is no longer monochromatic, and we speak of a Doppler spectrum to indicate the spectrum of the received signal around $f_0$. This phenomenon manifests itself even if both the transmitter and the receiver are static, but a person or an object moves, thus modifying the signal propagation.
The Doppler spectrum is characterized by the Doppler spread, which measures the dis-
persion in the frequency domain that is experienced by a transmitted sinusoidal signal. It
is intuitive that the more the characteristics of the radio channel vary with time, the larger
the Doppler spread will be. An important consequence of this observation is that the con-
vergence time of algorithms used in receivers, e.g., to perform adaptive equalization, must
be much smaller than the inverse of the Doppler spread of the channel, thus enabling the
adaptive algorithms to follow the channel variations.
8 The term indoor usually refers to areas inside buildings, possibly separated by walls of various thickness, material, and height. The term outdoor, instead, usually refers to areas outside of buildings: these environments can be of various types, for example, urban, suburban, rural, etc.
Figure 4.32. Physical representation and model of a two-ray radio channel, where g1 and g2
are assumed to be positive.
It is evident that the channel has a frequency-selective behavior, as the attenuation depends on frequency. For $g_1$ and $g_2$ real-valued, from (4.209) the following frequency response is obtained
$\left|G_{Ch}^{(bb)}(t, f)\right|^2 = g_1^2(t) + g_2^2(t) + 2 g_1(t) g_2(t) \cos(2\pi f \tilde{\tau}_2(t)) \qquad (4.210)$
shown in Figure 4.32. In any case, the signal distortion depends on the signal bandwidth in comparison to $1/\tilde{\tau}_2$.
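The frequency-selective behavior of (4.210) can be probed numerically: with $g_1$, $g_2$ fixed, the squared magnitude oscillates between $(g_1+g_2)^2$ and $(g_1-g_2)^2$, with notches at the odd multiples of $1/(2\tilde{\tau}_2)$. The gain values and the differential delay below are assumed for illustration.

```python
import math

def two_ray_gain_sq(f, g1, g2, tau2):
    # |G_Ch^(bb)(t, f)|^2 = g1^2 + g2^2 + 2*g1*g2*cos(2*pi*f*tau2), eq. (4.210)
    return g1 ** 2 + g2 ** 2 + 2.0 * g1 * g2 * math.cos(2.0 * math.pi * f * tau2)

g1, g2, tau2 = 1.0, 0.5, 1e-6        # assumed gains and differential delay (1 us)
peak = two_ray_gain_sq(0.0, g1, g2, tau2)            # (g1 + g2)^2
notch = two_ray_gain_sq(0.5 / tau2, g1, g2, tau2)    # (g1 - g2)^2 at f = 1/(2*tau2)
```

A signal whose bandwidth is much smaller than $1/\tilde{\tau}_2$ sees an essentially flat gain; one whose bandwidth spans a notch is distorted.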
Going back to the general case, for wideband communications, rays with different delays are assumed to be independent, that is, they do not interact with each other. In this case from (4.206) the received power is
$P_{Rc} = P_{Tx} \displaystyle\sum_{i=1}^{N_c} |g_i|^2 \qquad (4.211)$
From (4.211) we note that the received power is given by the sum of the squared amplitudes of all the rays. Conversely, in the transmission of narrowband signals the received power is the square of the amplitude of the vector resulting from the vector sum of all the received rays. Therefore, for a given transmitted power, the received power will be lower for a narrowband signal as compared to a wideband signal.
The rms delay spread of an instantaneous channel impulse response is defined as
$\tau_{rms} = \sqrt{\overline{\tau^2} - \left(\overline{\tau}\right)^2} \qquad (4.212)$
where
$\overline{\tau^n} = \dfrac{\displaystyle\sum_{i=1}^{N_c} |g_i|^2\, \tau_i^n}{\displaystyle\sum_{i=1}^{N_c} |g_i|^2} \qquad n = 1, 2 \qquad (4.213)$
With reference to the time-varying characteristics of the channels, we use the (average) rms delay spread $\overline{\tau}_{rms}$ obtained by substituting in (4.213), in place of $|g_i|^2$, its expectation. In this case $\overline{\tau}_{rms}$ measures the mean time dispersion that a signal undergoes because of multipath.
Typical values of the (average) rms delay spread are of the order of µs in outdoor mobile radio channels, and of the order of tens of ns in indoor channels.
We define as power delay profile, also called delay power spectrum or multipath intensity profile, the expectation of the squared amplitude of the channel impulse response, $\mathrm{E}[|g_i|^2]$, as a function of the delay $\tau_i$. In Table 4.5 power delay profiles are given for some typical channels.
$\begin{cases} g_1 = C + \tilde{g}_1 & i = 1 \\ g_i = \tilde{g}_i & i = 2, \ldots, N_c \end{cases} \qquad (4.214)$
where $C$ is a real-valued constant and $\tilde{g}_i$ is a complex-valued random variable with zero mean and Gaussian distribution (see Example 1.9.3 on page 67). In other words, whereas the first ray contains a direct (deterministic) component in addition to a random component, all the other rays are assumed to have only a random component: therefore the distribution
Table 4.5. Values of $\mathrm{E}[|g_i|^2]$ (in dB) and $\tau_i$ (in ns) for three typical channels.
$p_{|\bar{g}_1|}(a) = 2(1+K)\, a\, \exp\!\left[-K - (1+K)a^2\right] I_0\!\left[2a\sqrt{K(1+K)}\right] 1(a)$
$p_{|\bar{g}_i|}(a) = 2a\, e^{-a^2}\, 1(a) \qquad (4.215)$
where $I_0$ is the modified Bessel function of the first kind and order zero,
$I_0(x) = \dfrac{1}{2\pi} \displaystyle\int_{-\pi}^{\pi} e^{x\cos\alpha}\, d\alpha \qquad (4.216)$
The probability density (4.215) is given in Figure 4.33 for various values of $K$.
The probability density (4.215) is given in Figure 4.33 for various values of K .
In (4.214) the phase of gQi is uniformly distributed in [0; 2³ /. For a one-ray channel
model, the parameter K D C 2 =E[jgQ 1 j2 ], known as the Rice factor, is equal to the ratio
between the power of the direct component and the power of the reflected and/or scattered
component. In general for a model with more rays we take K D C 2 =Md , where Md is the
P c
statistical power of all reflected and/or scattered components, that is Md D iND1 E[jgQ i j2 ].
Assuming that the power delay profile is normalized such that
Nc
X
E[jgi j2 ] D 1 (4.217)
i D1
p
we obtain C D K =.K C 1/. Typical reference values for K are 3 and 10 dB. If C D 0,
i.e. no direct component exists, it is K D 0, and the Rayleigh distribution is obtained for
all the gains fgi g. For K ! 1, i.e. with no reflected and/or scattered components and,
hence, C D 1, we find the model having only the deterministic component.
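The Rice model (4.214)–(4.215) is straightforward to simulate: with the normalization (4.217), $C = \sqrt{K/(K+1)}$ and each Gaussian component of the scattered part has variance $1/(2(K+1))$. A sketch (the value $K = 5$ and the sample size are arbitrary choices):

```python
import math, random

def rice_envelope_samples(K, n, rng):
    # |g1| with g1 = C + g~1 as in (4.214); C = sqrt(K/(K+1)), E[|g~1|^2] = 1/(K+1)
    C = math.sqrt(K / (K + 1.0))
    sigma = math.sqrt(1.0 / (2.0 * (K + 1.0)))   # std of each Gaussian component
    return [math.hypot(C + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]

rng = random.Random(0)
samples = rice_envelope_samples(5.0, 20000, rng)
mean_power = sum(a * a for a in samples) / len(samples)   # should be close to 1
```

Setting $K = 0$ in the same routine yields Rayleigh-distributed envelopes, the limit case with no direct component.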
To justify the Rice model for $|g_1|$ we consider the transmission of a sinusoidal signal (4.184). In this case the expression of the received signal is given by (4.192), which we rewrite as follows:
$g_1(t) = [C + \tilde{g}_{1,I}(t)] + j\, \tilde{g}_{1,Q}(t) \qquad (4.218)$
[Plot: the Rice probability density function $p_{|\bar{g}_1|}(a)$ versus $a$ for $K = 0, 2, 5, 10$; the peak grows and narrows as $K$ increases.]
Figure 4.33. The Rice probability density function for various values of $K$. The Rayleigh density function is obtained for $K = 0$.
where $C$ represents the contribution of the possible direct component of the propagation signal, and $\tilde{g}_{1,I}$ and $\tilde{g}_{1,Q}$ are due to the scattered component. As the gains $\tilde{g}_{1,I}$ and $\tilde{g}_{1,Q}$ are given by the sum of a large number of random components, they can be approximated by independent Gaussian random processes with zero mean. The instantaneous envelope of the received signal is then given by
$\sqrt{[\tilde{g}_{1,I}(t) + C]^2 + \tilde{g}_{1,Q}^2(t)} \qquad (4.219)$
which, under the assumptions just formulated, is a Rice random variable for each instant $t$.
According to the model known as wide-sense stationary uncorrelated scattering (WSSUS), the values of $g$ for rays that arrive with different delays are uncorrelated, and $g$ is stationary in $t$. In other words, the autocorrelation is non-zero only for impulse responses that are considered for the same delay time. Moreover, as $g$ is stationary in $t$, if the delay time is the same, the autocorrelation only depends on the difference between the times at which the two impulse responses are evaluated.
2. Gaussian, unilateral:
$\mathtt{M}(\tau) = \sqrt{\dfrac{2}{\pi}}\; \dfrac{1}{\overline{\tau}_{rms}}\; e^{-\tau^2/(2\overline{\tau}_{rms}^2)} \qquad \tau \ge 0 \qquad (4.223)$
3. Exponential, unilateral:
$\mathtt{M}(\tau) = \dfrac{1}{\overline{\tau}_{rms}}\; e^{-\tau/\overline{\tau}_{rms}} \qquad \tau \ge 0 \qquad (4.224)$
The measure of the set of values $\tau$ for which $\mathtt{M}(\tau)$ is above a certain threshold is called the (average) excess delay spread of the channel. As in the case of the discrete channel model previously studied, we define the (average) rms delay spread as
$\overline{\tau}_{rms}^2 = \dfrac{\displaystyle\int_{-\infty}^{\infty} (\tau - \overline{\tau})^2\, \mathtt{M}(\tau)\, d\tau}{\displaystyle\int_{-\infty}^{\infty} \mathtt{M}(\tau)\, d\tau} \qquad (4.225)$
where
$\overline{\tau} = \displaystyle\int_{-\infty}^{\infty} \tau\, \mathtt{M}(\tau)\, d\tau \qquad (4.226)$
The inverse of the (average) rms delay spread is called the coherence bandwidth of the channel.
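For the exponential profile (4.224), both the mean delay (4.226) and the rms delay spread (4.225) equal the profile parameter $\overline{\tau}_{rms}$, which makes it a convenient numerical check; the unit value of the parameter is arbitrary.

```python
import math

tau_rms = 1.0     # profile parameter (arbitrary units)
dt = 0.001
taus = [i * dt for i in range(50001)]                    # 0 .. 50*tau_rms
M = [math.exp(-t / tau_rms) / tau_rms for t in taus]     # exponential profile (4.224)

norm = sum(M) * dt                                       # ~1: the profile is normalized
mean_tau = sum(t * m for t, m in zip(taus, M)) * dt / norm               # eq. (4.226)
var_tau = sum((t - mean_tau) ** 2 * m for t, m in zip(taus, M)) * dt / norm  # (4.225)
rms = math.sqrt(var_tau)
```

The coherence bandwidth is then simply `1.0 / rms` in the reciprocal units.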
For digital transmission over such channels, we observe that if $\overline{\tau}_{rms}$ is of the order of 20% of the symbol period, or larger, then signal distortion is non-negligible. Equivalently, if the coherence bandwidth of the channel is lower than 5 times the modulation rate of the transmission system, then we speak of a frequency-selective fading channel; otherwise the channel is flat fading. However, in the presence of flat fading the received signal may vanish completely, whereas frequency-selective fading produces several replicas of the transmitted signal at the receiver, so that a suitably designed receiver can recover the transmitted information.
Doppler spectrum
We now analyze the WSSUS channel model with reference to time variations. First we introduce the correlation function of the channel frequency response taken at instants $t$ and $t - \Delta t$ and, respectively, at frequencies $f$ and $f - \Delta f$,
[Plot: a power delay profile $\mathtt{M}(\tau)$ in dB versus $\tau$ (µs), decaying from 0 to about −30 dB over delays from 0 to 5 µs.]
$d(0) = 1 \qquad (4.237)$
10 In very general terms, we could have a different Doppler spectrum for each path, or gain $g(t, \tau)$, of the channel.
Therefore, $D(\lambda)$ can also be obtained as the Fourier transform of $\mathtt{r}_G(\Delta t, 0)$, which in turn can be determined by transmitting a sinusoidal signal (hence $\Delta f = 0$) and estimating the autocorrelation function of the amplitude of the received signal.
The maximum frequency $f_d$ of the Doppler spectrum support is called the Doppler spread of the channel and gives a measure of the fading rate of the channel. Another measure of the support of $D(\lambda)$ can be obtained through the rms Doppler spread, or second-order central moment of the Doppler spectrum. The inverse of the Doppler spread is called the coherence time: it gives a measure of the time interval within which the channel can be assumed to be time invariant, or static. Let $T$ be the symbol period in a digital transmission system; we usually say that the channel is fast fading if $f_d T > 10^{-2}$, and slow fading if $f_d T < 10^{-3}$.
Shadowing
The simplest relation between average transmitted power and average received power is
$P_{Rc} = \dfrac{P_0}{d^{\alpha}} \qquad (4.244)$
where Þ is equal to 2 for propagation in free space and to 4 for the simple 2-ray model
described before. For indoor and urban outdoor radio channels the relation depends on the
environment, according to the number of buildings, their dimensions, and also the material
used for their construction; in general, however, variations of the average received power
are lower in outdoor environments than in indoor environments.
Shadowing accounts for the fact that the average received power may exhibit fluctuations
around the value predicted by deterministic models. These fluctuations are modelled
as a log-normal random variable, that is e^ξ, where ξ is a Gaussian random variable with
zero mean and variance σ_ξ^2. If P_Rc is the average received power obtained by deterministic
rules, in the presence of shadowing it becomes e^ξ P_Rc; in practice the shadowing variance
provides a measure of the adequacy of the adopted deterministic model.
A propagation model that completely ignores any information on land configuration, and
therefore is based only on the distance between transmitter and receiver, exhibits a shadowing
with σ_{ξ,dB} = 12 dB. The relation between σ_ξ and σ_{ξ,dB} is σ_ξ = 0.23 σ_{ξ,dB}. By improving
the accuracy of the propagation model, for example by using more details of the
environmental configuration, the shadowing can be reduced; if we had an enormous
amount of topographic data and the means to process them, we would have a model with
σ_ξ = 0. Hence, shadowing should be considered in the performance evaluation of mobile
radio systems, whereas for the correct design of a network it is good practice to make use
of the largest possible amount of topographic data.
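The log-normal model above can be sketched numerically as follows (a sketch under stated assumptions: the function name and σ_{ξ,dB} = 12 dB are taken as in the text, the sample count is arbitrary; note that 0.23 ≈ ln(10)/10, which converts a dB-domain standard deviation into the natural-log domain):

```python
import math
import random

# Log-normal shadowing sketch: received power = exp(xi) * P_R,
# with xi Gaussian, zero mean, sigma_xi = 0.23 * sigma_xi_dB.

def shadowed_power(P_R: float, sigma_xi_dB: float, rng: random.Random) -> float:
    sigma_xi = 0.23 * sigma_xi_dB
    xi = rng.gauss(0.0, sigma_xi)
    return math.exp(xi) * P_R

rng = random.Random(0)
samples = [shadowed_power(1.0, 12.0, rng) for _ in range(20000)]
samples.sort()
# The median of exp(xi)*P_R should be close to P_R itself,
# since the median of exp(xi) is exp(0) = 1.
print(samples[len(samples) // 2])
```

The mean received power, by contrast, exceeds P_R, since E[e^ξ] = e^{σ_ξ²/2} > 1 for a zero-mean Gaussian ξ.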
Final remarks
A signal that propagates in a radio channel for mobile communications undergoes a type
of fading that depends on the signal as well as on the channel characteristics. In particular,
whereas the delay spread due to multipath leads to dispersion in the time domain and
therefore to frequency-selective fading, the Doppler spread causes dispersion in the domain
of the variable λ and therefore time-selective fading.
The first type of fading can be divided into flat fading and frequency-selective fading. In
the first case the channel has a constant gain; in other words, the inverse of the transmitted
signal bandwidth is much larger than the delay spread of the channel and g(t, τ) can be
approximated by a delta function, with random amplitude and phase, centered at τ = 0.
In the second case, instead, the channel has a time-varying frequency response within the
passband of the transmitted signal, and consequently the signal undergoes frequency-selective
fading; these conditions occur when the inverse of the transmitted signal bandwidth is of
the same order as, or smaller than, the delay spread of the channel. The received signal then
consists of several attenuated and delayed versions of the transmitted signal.
A channel can be fast fading or slow fading. In a fast fading channel, the impulse response
of the channel changes within a symbol period, that is, the coherence time of the channel is
smaller than the symbol period; this condition leads to signal distortion, which increases with
increasing Doppler spread. Usually there are no remedies to compensate for such distortion
unless the symbol period is decreased; on the other hand, this choice leads to larger intersymbol
interference. In a slow fading channel, the impulse response changes much more slowly
with respect to the symbol period. In general, the channel can be assumed time invariant
for a time interval that is proportional to the inverse of the Doppler spread.
[Figure: tapped delay line with input x(kT_Q), delay elements T_Q, and output y(kT_Q).]
If the channel model includes a deterministic component for the ray with delay τ_i = i T_Q,
a constant C_i must be added to the random component ḡ_i. Furthermore, if the channel model
includes a Doppler shift f_{s,i} for the i-th branch, then we need to multiply the term C_i + ḡ_i
by the exponential function exp(j2π f_{s,i} k T_Q). Observing (4.211), to avoid modifying the
average transmitted power, the coefficients {g_i}, i = 0, 1, ..., N_c - 1, are scaled so that

    Σ_{i=0}^{N_c-1} E[|g_i(k T_Q)|^2] = 1        (4.245)

For example, the above condition is satisfied if each signal g_i' has unit statistical power¹¹
and the {σ_i} satisfy the condition

    Σ_{i=0}^{N_c-1} (σ_i^2 + C_i^2) = 1        (4.246)
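A sketch of how condition (4.246) can be enforced in a simulation (the exponential power delay profile values, the deterministic first ray, and the function names are illustrative assumptions, not from the text):

```python
import math
import random

def normalize_taps(sigma2, C):
    """Scale {sigma_i^2} and {C_i} so that sum(sigma_i^2 + C_i^2) = 1."""
    total = sum(s + c * c for s, c in zip(sigma2, C))
    return [s / total for s in sigma2], [c / math.sqrt(total) for c in C]

def draw_taps(sigma2, C, rng):
    """One realization g_i = C_i + gbar_i, gbar_i complex Gaussian."""
    taps = []
    for s, c in zip(sigma2, C):
        g = complex(rng.gauss(0, math.sqrt(s / 2)),
                    rng.gauss(0, math.sqrt(s / 2)))
        taps.append(c + g)
    return taps

# Illustrative exponential power delay profile, deterministic first ray.
sigma2 = [math.exp(-i / 2.0) for i in range(6)]
C = [0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
sigma2, C = normalize_taps(sigma2, C)
print(sum(s + c * c for s, c in zip(sigma2, C)))   # -> 1.0

# Check (4.245) by Monte Carlo: average total tap power close to 1.
rng = random.Random(1)
N = 20000
avg = 0.0
for _ in range(N):
    avg += sum(abs(g) ** 2 for g in draw_taps(sigma2, C, rng)) / N
print(avg)
```

The normalization keeps the average power of the simulated channel output equal to that of its input, as required by (4.245).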
    a_2 = (1 + ω_0^4) / (1 + ω_0^2 + √2 ω_0)^2        (4.249)

    c_0 = (1/4)(1 + a_1 + a_2)        (4.250)
    Δf_m = f_d / N_f        (4.253)

or, defining K_d = ∫_0^{f_d} D^{1/3}(f) df, as

    Δf_m = K_d / (N_f D^{1/3}(f_m)),    m = 1, ..., N_f        (4.254)
{b_n}, n = 0, ..., 21:

    1.3651e-4   8.1905e-4   2.0476e-3   2.7302e-3
    2.0476e-3   9.0939e-4   6.7852e-4   1.3550e-3
    1.8067e-3   1.3550e-3   5.3726e-4   6.1818e-5
    7.1294e-5   9.5058e-5   7.1294e-5   2.5505e-5
    1.3321e-5   4.5186e-5   6.0248e-5   4.5186e-5
    1.8074e-5   3.0124e-6
Figure 4.37. Nine realizations of |g_Ch^(bb)(t, τ)| for a Rayleigh channel with an exponential power
delay profile having τ_rms = 0.5 T.
The phases φ_{i,m}, Φ_{i,I} and Φ_{i,Q} are uniformly distributed in [0, 2π) and statistically
independent. This choice of Φ_{i,I} and Φ_{i,Q} ensures that the real and imaginary parts of g_i'
are statistically independent.
The amplitude is given by A_{i,m} = √(D(f_m) Δf_m). If D(f) is flat, by the central limit
theorem we can claim that g_i' is a Gaussian process; if instead D(f) presents some
frequencies with large amplitude, A_{i,m} must be generated as a Gaussian random variable with
zero mean and variance D(f_m) Δf_m.
Figure 4.37 shows nine realizations of the amplitude of the impulse response
of a Rayleigh channel obtained by the simulation model of Figure 4.35, for an exponential
power delay profile with τ_rms = 0.5 T. The Doppler frequency f_d was assumed to be zero.
We point out that the parameter τ_rms provides little information on the actual behavior
of g_Ch^(bb), which can scatter over a duration equal to 4-5 times τ_rms.
4.7.1 Characteristics
Telephone channels, originally conceived for the transmission of voice, today are extensively
used also for the transmission of data. Transmission of a signal over a telephone
channel is subject to the impairments described below.
4.7. Telephone channel 319
Linear distortion
The frequency response G_Ch(f) of a telephone channel can be approximated by that of a
bandpass filter with passband extending from about 300 to 3400 Hz.
The plots of the attenuation

    a(f) = -20 log_10 |G_Ch(f)|        (4.256)

and of the envelope delay for two typical telephone channels are shown in Figure 4.38.
Noise sources
Impulse noise. It is caused by electromechanical switching devices; it is measured by the
number of times per unit of time that the noise level exceeds a certain threshold.
Non-linear distortion
It is caused by amplifiers and by non-linear A-law and μ-law converters (see Chapter 5).
Frequency offset
It is caused by the use of carriers for frequency upconversion and downconversion. The
relation between the channel input x(t) and output y(t) is given by

    Y(f) = { X(f - f_off)    f > 0
           { X(f + f_off)    f < 0        (4.258)

Usually f_off ≤ 5 Hz.
Figure 4.38. Attenuation and envelope delay distortion for two typical telephone channels.
Figure 4.39. Signal to quantization noise ratio as a function of the input signal power for
three different inputs.
Figure 4.40. Three of the many signal paths in a simplified telephone channel with a single
two-to-four wire conversion at each end.
Phase jitter
It is a generalization of the frequency offset (see (4.270)).
Echo
As discussed in Section 3.6.5, it is caused by the mismatched impedances of the hybrid.
As illustrated in Figure 4.40, there are two types of echoes:
1. Talker echo: part of the signal is reflected and returns to the receiver at the transmit
side. If the echo is not appreciably delayed, it is practically indistinguishable from the
original voice signal;
2. Listener echo: if the echo is reflected a second time, it returns to the listener and
disturbs the original signal.
On terrestrial channels the round-trip delay of echoes is of the order of 10-60 ms, whereas
on satellite links it may be as large as 600 ms. We note that the effect of echo is similar to
multipath fading in radio systems. To mitigate the effect of echo there are two strategies:
• use echo suppressors, which attenuate the unused connection of a four-wire transmission
line;
• use echo cancellers, which cancel the echo at the source, as illustrated in the scheme of
Figure 3.36.
where A(t) ≥ 0 is the signal envelope, and φ(t) is the instantaneous phase deviation.
The envelope and the phase of the output signal, s_Tx(t), depend on instantaneous,
i.e. memoryless, transformations of the input:

    s_Tx^(bb)(t) = G[A(t)] e^{j(φ(t) + Φ[A(t)])}        (4.262)

The functions G[A] and Φ[A], called envelope transfer functions, represent respectively the
amplitude/amplitude (AM/AM) conversion and the amplitude/phase (AM/PM) conversion
of the amplifier.
In practice, HPAs are of two types. For each type we give the AM/AM and AM/PM
functions commonly adopted for the analysis. First, however, we need to introduce some
normalizations. As a rule, the operating point of the amplifier is identified by the
back-off. We adopt here the following definitions for the input back-off (IBO) and the
output back-off (OBO):

    IBO = 20 log_10 (S / √M_s)    (dB)        (4.263)

    OBO = 20 log_10 (S_Tx / √M_{s_Tx})    (dB)        (4.264)

where M_s is the statistical power of the input signal s, M_{s_Tx} is the statistical power of
the output signal s_Tx, and S and S_Tx are the amplitudes of the input and output signals,
respectively, that lead to saturation of the amplifier. Here we assume S = 1 and S_Tx = G[1]
for all the amplifiers considered.
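A minimal numerical sketch of the back-off definition (4.263), assuming S = 1 as above (function name and example power are illustrative):

```python
import math

# Input back-off per (4.263): IBO = 20 log10( S / sqrt(M_s) ),
# with saturation input amplitude S = 1 assumed.

def ibo_db(S: float, Ms: float) -> float:
    return 20 * math.log10(S / math.sqrt(Ms))

# Input statistical power 0.1 -> the signal operates 10 dB below saturation.
print(ibo_db(1.0, 0.1))   # -> 10.0
```

A larger back-off keeps the signal in the (more nearly linear) small-signal region of the amplifier, at the cost of output power.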
TWT. The travelling wave tube (TWT) is a device characterized by a strong AM/PM
conversion. The conversion functions are

    G[A] = α_A A / (1 + β_A A^2)        (4.265)

    Φ[A] = α_Φ A^2 / (1 + β_Φ A^2)        (4.266)

where α_A, β_A, α_Φ and β_Φ are suitable parameters. Equations (4.265) and (4.266) are
illustrated in Figure 4.42 for α_A = 1, β_A = 0.25, α_Φ = 0.26 and β_Φ = 0.25.
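The envelope model (4.262) combined with (4.265)-(4.266) can be sketched as follows, using the parameter values of Figure 4.42 (the function names are illustrative):

```python
import cmath

# TWT envelope nonlinearities (4.265)-(4.266), parameters of Figure 4.42.
alpha_A, beta_A = 1.0, 0.25
alpha_P, beta_P = 0.26, 0.25

def am_am(A: float) -> float:
    return alpha_A * A / (1 + beta_A * A * A)

def am_pm(A: float) -> float:
    return alpha_P * A * A / (1 + beta_P * A * A)   # radians

def twt(s: complex) -> complex:
    """Memoryless envelope transformation (4.262) applied to one sample."""
    A, phi = abs(s), cmath.phase(s)
    return am_am(A) * cmath.exp(1j * (phi + am_pm(A)))

out = twt(1.0 + 0.0j)               # input at saturation amplitude A = 1
print(abs(out), cmath.phase(out))   # envelope 0.8, phase rotation 0.208 rad
```

At the saturation input A = 1 the output envelope is G[1] = 1/1.25 = 0.8 and the AM/PM rotation is 0.26/1.25 ≈ 0.208 rad, i.e. about 12 degrees.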
SSPA. The solid state power amplifier (SSPA) has a more linear behavior in the region
of small signals as compared to the TWT. The AM/PM conversion is usually negligible.
Figure 4.42. AM/AM and AM/PM characteristics of a TWT for α_A = 1, β_A = 0.25, α_Φ = 0.26
and β_Φ = 0.25.
[Figure: G[A] (dB) versus A (dB) for p = 1, 2, 3.]
Figure 4.44. AM/AM experimental characteristic of two amplifiers operating at 38 GHz and
40 GHz.
It is interesting to compare the above analytical models with the behavior of practical
HPAs. Figure 4.44 illustrates the AM/AM characteristics of two waveguide HPAs operating
at frequencies of 38 GHz and 40 GHz.
Transmission medium
The transmission medium is typically modelled as a filter. For transmission lines and radio
links the models are given respectively in Sections 4.4 and 4.6.
Additive noise
Several noise sources that cause a degradation of the received signal may be present in a
transmission system. Consider, for example, the noise introduced by a receive antenna or
the thermal noise and shot noise generated by the pre-amplifier stage of a receiver. At the
receiver input, all these noise signals are modelled as an effective additive white Gaussian
noise (AWGN) signal, statistically independent of the desired signal. The power spectral
density of the AWGN noise can be obtained by the analysis of the system devices, or by
experimental measurements.
Phase noise
The demodulators used at receivers are classified as "coherent" or "non-coherent",
depending on whether or not they use, to demodulate the received signal, a carrier that
ideally has the same phase and frequency as the carrier at the transmitter.
Typically both phase and frequency are recovered from the received signal by a phase-locked
loop (PLL), which employs a local oscillator. The recovered carrier may
differ from the transmitted carrier because of phase noise,¹² due to the short-term instability,
i.e. frequency drift, of the oscillator, and because of the dynamics and transient behavior
of the PLL.
The recovered carrier is expressed as

    v(t) = V_0 [1 + a(t)] cos(ω_0 t + φ_j(t) + (d/2) t^2)        (4.270)

where d (long-term drift) represents the effect of ageing of the oscillator, a(t) is the
amplitude noise, and φ_j(t) denotes the phase noise. Often the amplitude noise a(t), as
well as the effect of ageing, can be neglected. The phase noise is usually represented in a
transmission system model as in Figure 4.41.
The phase noise φ_j(t) consists of deterministic components and random noise. For
example, the effects of temperature changes, supply voltage, and the output impedance of
the oscillator are included among the deterministic components.
Ignoring the deterministic effects, with the exception of the frequency drift, a PSD model
of φ_j(t) comprises five terms:

    P_{φ_j}(f) = k_4 f_0^4/f^4 + k_3 f_0^3/f^3 + k_2 f_0^2/f^2 + k_1 f_0/f + k_0        (4.271)

for f_ℓ ≤ f ≤ f_h, where the five terms represent, respectively, random frequency walk,
flicker frequency noise, random phase walk (white frequency noise), flicker phase noise,
and white phase noise.
A simplified model, often used, is given by

    P_{φ_j}(f) = { a          |f| ≤ f_1
                 { b / f^2    f_1 < |f| ≤ f_2        (4.272)
                 { c          |f| > f_2

where the parameters a and c are typically of the order of -65 dBc/Hz and -125 dBc/Hz,
respectively, and b is a scaling factor that depends on f_1 and f_2 and assures continuity
of the PSD. dBc means dB relative to the carrier: it represents the statistical power of the
phase noise, expressed in dB, with respect to the statistical power of the desired signal
received in the passband.
Depending on the values of a, b, c, f_1 and f_2, typical values of the statistical power of
φ_j(t) are in the range from 10^-2 to 10^-4. The plot of (4.272) is shown in Figure 4.45 for
f_1 = 0.1 MHz, f_2 = 2 MHz, a = -65 dBc/Hz, and c = -125 dBc/Hz.
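A minimal numerical sketch of (4.272), under stated assumptions: b is chosen for continuity at f_1 (b = a f_1^2), and the contribution of the flat floor beyond f_2 is neglected when integrating; all names and the integration range are illustrative.

```python
# Simplified phase-noise PSD (4.272), values of Figure 4.45.
a_dBc = -65.0
f1, f2 = 0.1e6, 2e6
a = 10 ** (a_dBc / 10)
b = a * f1 ** 2          # continuity of a and b/f^2 at f = f1 (assumption)

def psd(f: float) -> float:
    f = abs(f)
    return a if f <= f1 else b / f ** 2   # sketch valid up to f2

# Statistical power of phi_j over [-f2, f2] (closed form for this PSD):
# 2 * [ a*f1 + integral_{f1}^{f2} b/f^2 df ]
power = 2 * (a * f1 + b * (1 / f1 - 1 / f2))
print(power)   # ~0.12 rad^2 for these illustrative values
```

The two-sided integral gives about 0.12 rad^2 here; smaller values of a (or a narrower band) bring the power into the 10^-2 to 10^-4 range quoted above.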
[Figure 4.45. P_{φ_j}(f) (dBc/Hz) versus f (MHz); the decay in the intermediate region is approximately -20 dB/decade.]
Chapter 5
Digital representation of waveforms
Figure 5.1a illustrates the conventional transmission of an analog signal, for example,
speech or video, over an analog channel; in this scheme the transmitter usually consists
of an amplifier and possibly a modulator, the analog transmission channel is of the type
discussed in Chapter 4, and the receiver consists of an amplifier and possibly a demodulator.
Alternatively, the transmission may take place by first encoding1 the information contained
in the analog signal into a sequence of bits using for example an analog-to-digital converter
(ADC), as illustrated in Figure 5.1b.
If T_b is the time interval between two consecutive bits of the sequence, the bit rate of
the ADC is R_b = 1/T_b (bit/s). The binary message is converted by a digital modulator into
a waveform that is suitable for transmission over an analog channel. At the receiver, the
reverse process occurs: in this case a digital demodulator restores the message, whereas the
conversion of the sequence of bits to an analog signal is performed by a digital-to-analog
converter (DAC). The system that has as an input the sequence of bits produced by the
ADC, and as an output the sequence of bits produced by the digital demodulator is called
a binary channel (see Chapter 7).
In this chapter the principles and methods for the conversion of analog signals into binary
messages and viceversa will be discussed; as a practical example we will use speech, but
the principles may be extended to any analog signal. To compute system performance, a
fundamental parameter is the signal-to-noise ratio. Let s(t) be the original signal, s̃(t) the
reconstructed signal, and e_q(t) = s̃(t) - s(t); then the signal-to-noise ratio is defined as

    Λ_q = E[s^2(t)] / E[e_q^2(t)]        (5.1)
1 We bring to the attention of the reader that the terms “encoder” and “decoder” are commonly used to indicate
various devices in a communication system. In this chapter we will deal with encoders and decoders for the
digital representation of analog waveforms.
332 Chapter 5. Digital representation of waveforms
an analog passband signal that can be transmitted over the telephone channel. In Figure 5.2,
the source generates a speech signal or a data file; in the latter case, a modem is required to
transmit the signal. The analog signal s.t/ that has a band of approximately 300–3400 Hz
is sent over a local loop to the central office (see Chapter 4): here it is usually converted
into a binary digital message via PCM at 64 kbit/s; in turn this message is modulated before
being transmitted over an analog channel. After having crossed several central offices where
switching (routing) of the signal takes place, the PCM encoded message arrives at the desti-
nation central office: here it is converted into an analog signal and sent over a local loop to
the end user. It is here that the signal must be identified as a speech signal or a digitally mod-
ulated signal; in the latter case a modem will demodulate it to reproduce the data message.
Figure 5.3 illustrates the concept of direct digital access at the user’s premises. An analog
signal is converted into a digital message via an ADC. The user digital message is then sent
over the analog channel by a modulator. At the receiver the inverse process is established,
where the digital message obtained at the output of the demodulator may be used to restore
an analog signal via a DAC.
In comparing the two systems, we note the waste of capacity of the system of Figure 5.2.
For example, for a 9600 bit/s modem, the modulated PCM encoded signal requires a standard
capacity of R_b = 64 kbit/s. By directly accessing the PCM link at the user's home,
we could instead transmit 64000/9600 ≈ 6 data signals at 9600 bit/s.
is strongly correlated and almost periodic, with a period called the pitch, and exhibits
large amplitudes; conversely, in an unvoiced speech spurt the signal is weakly correlated
and has small amplitudes. We note moreover that the average level of speech changes over
time: indeed, speech is a non-stationary signal. It is interesting to observe in Figure 5.6 the
instantaneous spectrum of some voiced and unvoiced sounds; we also note that the latter
may have a bandwidth larger than 10 kHz.
Concerning the amplitude distribution of speech signals, we observe that over short time
intervals, of the order of a few tens of milliseconds (or a few hundred samples at a
sampling frequency of 8 kHz), the amplitude statistic is Gaussian to a good approximation;
over long time intervals, because of the numerous pauses in speech, it tends to exhibit a gamma
or Laplacian distribution. We give here the probability density functions of the amplitude that
are usually adopted. Let σ_s be the standard deviation of the signal s(t); then we have
    gamma:      p_s(a) = ( √3 / (8π σ_s |a|) )^{1/2} e^{-√3 |a| / (2σ_s)}

    Laplacian:  p_s(a) = ( 1 / (√2 σ_s) ) e^{-√2 |a| / σ_s}        (5.2)

    Gaussian:   p_s(a) = ( 1 / (√(2π) σ_s) ) e^{-a^2 / (2σ_s^2)}
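As a sanity check of the Laplacian model in (5.2), we can verify numerically that it has unit area and variance σ_s^2 (a sketch with σ_s = 1; the integration range and step are arbitrary choices):

```python
import math

sigma = 1.0

def laplacian(a: float) -> float:
    """Laplacian pdf of (5.2): (1/(sqrt(2)*sigma)) * exp(-sqrt(2)|a|/sigma)."""
    return math.exp(-math.sqrt(2) * abs(a) / sigma) / (math.sqrt(2) * sigma)

# Trapezoidal integration on [-20, 20] (tails beyond are negligible).
N, lo, hi = 100000, -20.0, 20.0
h = (hi - lo) / N
area = var = 0.0
for k in range(N + 1):
    a = lo + k * h
    w = 0.5 if k in (0, N) else 1.0
    area += w * laplacian(a) * h
    var += w * a * a * laplacian(a) * h
print(area, var)   # both close to 1.0
```

The same check applied to the Gaussian model works identically; the gamma model needs care near a = 0, where the density has an integrable singularity.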
As mentioned above, analog modulated signals generated by modems, often called voice-
band data signals, are also transmitted over telephone channels. Figure 5.7 illustrates a
Figure 5.6. Spectrum of voiced and unvoiced sounds for a sampling frequency of 20 kHz.
Figure 5.7. Signal generated by a modem employing FSK modulation for the transmission of
1200 bit/s.
Figure 5.8. Signal generated by modems employing PSK modulation for the transmission of:
(a) 2400 bit/s; (b) 4800 bit/s.
signal produced by the 202S modem, which employs FSK modulation for the transmis-
sion of 1200 bit/s, whereas Figure 5.8a and Figure 5.8b illustrate signals generated by the
201C and 208B modems, which employ PSK modulation for the transmission of 2400 and
4800 bit/s, respectively. For the definition of FSK and PSK modulation we refer the reader
to Chapter 6.
In general, we note that the average level of signals generated by modems is stationary;
moreover, if the bit rate is low, signals are strongly correlated.
5.1. Analog and digital access 337
Speech coding
Speech coding addresses person-to-person communications and is strictly related to the
transmission, for example, over the public network, and storage of speech signals. The aim
is to represent, using an encoder, speech as a digital signal that requires the lowest possible
bit rate to recreate, by an appropriate decoder, the speech signal at the receiver [1].
Depicted in Figure 5.9 is a basic scheme, denoted as ADC, that provides the analog-to-digital
conversion (encoding) of the signal, consisting of:
1. a sampler with sampling frequency F_c = 1/T_c;
2. a quantizer;
3. an inverse bit-mapper, which maps each quantized level into a binary code word.
As indicated by the sampling theorem, the choice of the sampling frequency F_c = 1/T_c
is related to the bandwidth of the signal s(t) (see (1.142)). In practice, there is a trade-off
between the complexity of the anti-aliasing filter and the choice of the sampling frequency,
which must be greater than twice the signal bandwidth. For audio signals, F_c depends on
the signal quality that we wish to maintain and therefore on the application (see
Table 5.1) [2].
Figure 5.9. Basic scheme for the digital transmission of an analog signal.
The choice of the quantizer parameters is somewhat more complicated and will be dealt
with in detail in the following sections. For now we consider the quantizer as an instantaneous
non-linear transformation that maps the real values of s into a finite number of values
of s_q. To illustrate the principle of an ADC, let us assume that s_q takes values from
a set of 8 elements:²

    Q[s(kT_c)] = s_q(kT_c) ∈ {-Q_4, -Q_3, -Q_2, -Q_1, Q_1, Q_2, Q_3, Q_4}        (5.3)
Therefore s_q(kT_c) may assume only a finite number of values, which can be represented
as binary values, for example, using the inverse bit mapper of Table 5.2.
It is convenient to consider the sequence of bits that gives the binary representation of
{s_q} instead of the sequence of values itself. In our example, with a representation using
three bits per sample, the bit rate of the system is equal to

    R_b = 3 F_c    (bit/s)        (5.4)
The inverse process (decoding) takes place at the receiver: the bit-mapper (BMAP) restores
the quantized levels, and an interpolator filter yields an estimate of the analog signal.
    level   c(k)   binary representation
    -Q_4    0      000
    -Q_3    1      001
    -Q_2    2      010
    -Q_1    3      011
    Q_1     4      100
    Q_2     5      101
    Q_3     6      110
    Q_4     7      111
2 The notation adopted in (5.3) to define the set reflects the fact that in most cases the set of values assumed by
s_q is symmetrical around the origin.
3 From Observation 1.7 on page 71, if s_q(kT_c) is WSS, then the interpolated random process s_q(t) is WSS
and E[s_q^2(t)] = E[s_q^2(kT_c)], whenever the gain of the interpolator filter is equal to one. As a result, the signal-
to-noise ratio in (5.1) becomes independent of t and can be computed using the samples of the processes,
Λ_q = E[s^2(kT_c)] / E[e_q^2(kT_c)].
[Figure 5.10. Impulse response g_I of the holder, of duration T_c = 1/F_c.]
Typically, however, the DAC employs a simple holder that holds the input values, as
illustrated in Figure 5.10. In this case

    g_I(t) = rect((t - T_c/2) / T_c) = w_{T_c}(t)        (5.6)

and

    G_I(f) = T_c sinc(f / F_c) e^{-j2π f T_c/2}        (5.7)
Unless the sampling frequency has been chosen sufficiently higher than twice the bandwidth
of s(t), the filter (5.7), besides not attenuating enough the images of s̃_q(kT_c),
introduces distortion in the passband of the desired signal.⁴ A solution to this
problem consists in introducing, before interpolation, a digital equalizer filter with a frequency
response equal to 1/sinc(f T_c) in the passband of s(t). Figure 5.11 illustrates the
solution.
A simple digital equalizer filter is given by

    G_comp(z) = -1/16 + (9/8) z^{-1} - (1/16) z^{-2}        (5.8)
whose frequency response is given in Figure 5.12.
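The droop compensation can be checked numerically; a sketch (with T_c = 1, so frequencies are normalized to F_c; function names are illustrative) comparing |G_comp| against the ideal 1/sinc(f T_c) response:

```python
import cmath
import math

def g_comp(f: float) -> complex:
    """Frequency response of (5.8) at normalized frequency f (Tc = 1)."""
    z1 = cmath.exp(-2j * math.pi * f)          # z^-1 on the unit circle
    return -1 / 16 + (9 / 8) * z1 - (1 / 16) * z1 * z1

def inv_sinc(f: float) -> float:
    """Ideal compensator 1/sinc(f) = pi*f / sin(pi*f)."""
    return 1.0 if f == 0 else (math.pi * f) / math.sin(math.pi * f)

for f in (0.0, 0.1, 0.25):
    print(f, abs(g_comp(f)), inv_sinc(f))
```

At f = 0 both equal 1 (no gain change at DC), and over the lower part of the band the three-coefficient filter tracks 1/sinc(f T_c) to within a few percent, which is the point of (5.8).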
[Figure 5.11. Interpolation with equalization: s̃_q(kT_c) is filtered by g_comp (at rate 1/T_c) and by the holder w_{T_c}, yielding s̃_q(t).]
4 In many applications, to simplify the analog interpolator filter, the signal is oversampled before interpolation:
for example, by digital interpolation of the signal s̃_q(kT_c) by at least a factor of 4.
Figure 5.12. Frequency responses of a three-coefficient equalizer filter g_comp and of the
overall filter g_I = g_comp * w_{T_c}.
If an error occurs, the reconstructed binary representation c̃(k) is different from c(k):
consequently the reconstructed level is s̃_q(kT_c) ≠ s_q(kT_c). In the case of a speech signal,
Figure 5.13. Frequency responses of an IIR equalizer filter g_comp and of the overall filter
g_I = g_comp * w_{T_c}.
such an event is perceived by the ear as an annoying impulse disturbance. For speech signals
to have acceptable quality at the receiver, it must be P_bit ≤ 10^-3.
Various coding techniques are listed in Table 5.3. Table 5.4 illustrates the characteristics
of a few systems, putting into evidence that for more sophisticated encoders the implementation
complexity, expressed in millions of instructions per second (MIPS), as well as the
delay introduced by the encoder (latency), can be considerable.
The various coding techniques differ in quality and cost of implementation. With
respect to the perceived quality, on a scale from poor to excellent, three categories of encoders
perform as illustrated in Figure 5.16: obviously, a higher implementation complexity
is expected for encoders with low bit rate and good quality. We go from bit rates in the
range from 4.4 to 9.6 kbit/s for cellular radio systems, to bit rates in the range from 16 to
64 kbit/s for transmission over the public network.
Generally a coding technique is strictly related to the application and depends on various
factors:
• signal type (for example speech, music, voice-band data, signalling, etc.);
• maximum tolerable latency;
• implementation complexity.
In particular, speech encoder applications for bit rates in the range 4-16 kbit/s are:
• long distance and satellite transmission;
• digital mobile radio (cellular radio);
    Technique   Rate (kbit/s)   Complexity (MIPS)   Latency (ms)
    PCM         64              0.0                 0
    ADPCM       32              0.1                 0
    ASBC        16              1                   25
    MELP        8               10                  35
    CELP        4               100                 35
    LPC         2               1                   35
Figure 5.16. Audio quality vs. bit rate for three categories of encoders.
Figure 5.17. Quantization and mapping scheme: (a) encoder, (b) decoder.
• code word c(k) ∈ {0, 1, ..., L - 1}, which represents the value of s_q(k). The system
with input s(k) and output c(k) constitutes a PCM encoder.
The quantizer can be described by the function

    Q : ℝ → A_q        (5.12)

For a given partition of the real axis into the intervals {R_i}, i = -L/2, ..., -1, 1, ..., L/2,
such that ℝ = ∪_{i=-L/2, i≠0}^{L/2} R_i and R_i ∩ R_j = ∅ for i ≠ j, (5.12) implies the following rule
Observation 5.1
In this chapter the notation c.k/ is used to indicate both an integer number and its vectorial
binary representation (see (5.18)). Furthermore, in the context of vector quantization the
elements of the set Aq are called code words.
where Δ is the quantization step size. Two types of characteristics are distinguished, mid-
tread and mid-riser, depending on whether or not the zero output level belongs to A_q.
[Figure: mid-riser characteristic s_q = Q[s] with L = 8: output levels ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2, decision thresholds at -4Δ, ..., 4Δ, code words 000-111.]
Quantization error
We will refer to symmetrical quantizers with mid-riser characteristic. An example with
L = 2^3 = 8 levels is given in Figure 5.21: in this case the decision thresholds are τ_i = iΔ,
i = -L/2 + 1, ..., -1, 0, 1, ..., L/2 - 1, with, as usual, τ_{L/2} = +∞ and τ_{-L/2} = -∞.
The output values are given by

    Q_i = { (i + 1/2) Δ    i = -L/2, ..., -1
          { (i - 1/2) Δ    i = 1, ..., L/2        (5.20)

Correspondingly, the decision intervals are given by (5.14).
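A mid-riser quantizer with thresholds τ_i = iΔ and outputs (5.20) can be sketched in a few lines (the function name and the clipping formulation are illustrative; the clip to ±(L/2 - 1/2)Δ reproduces the saturation to Q_{±L/2}):

```python
import math

def midriser(s: float, b: int, delta: float) -> float:
    """Mid-riser uniform quantizer: thresholds at i*delta, outputs per (5.20),
    saturating at the outermost levels Q_{+-L/2}."""
    L = 2 ** b
    q = (math.floor(s / delta) + 0.5) * delta   # midpoint of the decision cell
    qmax = (L / 2 - 0.5) * delta                # outermost output value
    return max(-qmax, min(qmax, q))

# b = 3, delta = 1: output levels are +-0.5, +-1.5, +-2.5, +-3.5.
print(midriser(0.2, 3, 1.0))    # -> 0.5
print(midriser(-1.7, 3, 1.0))   # -> -1.5
print(midriser(10.0, 3, 1.0))   # -> 3.5 (saturation)
```

Inside the quantizer range the error magnitude never exceeds Δ/2, as stated in (5.22) below; outside it, the output saturates and the error can grow without bound.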
[Figure: mid-tread characteristic s_q = Q[s] with L = 8: output levels 0, ±Δ, ±2Δ, 3Δ, -4Δ, decision thresholds at ±Δ/2, ..., ±7Δ/2, code words 000-111.]
We note that if s_q(k) = Q_i, then the b - 1 least significant bits of c(k) are given by
the binary representation of (|i| - 1), and c(k) assumes amplitude values that go from 0 to
L/2 - 1 = 2^{b-1} - 1.
If for each value of s we compute the corresponding error e_q = Q(s) - s, we obtain the
quantization error characteristic of Figure 5.21. We define the quantizer saturation value as

    τ_sat = (L/2) Δ        (5.21)

which is shifted by Δ with respect to the last finite threshold value. Then we have

    |e_q| ≤ Δ/2    for |s| < τ_sat        (5.22)

and

    e_q = { Q_{L/2} - s     for s > τ_sat
          { Q_{-L/2} - s    for s < -τ_sat        (5.23)

Consequently, e_q may assume large values if |s| > τ_sat. This observation suggests that the
real axis be divided into two parts:
1. the region s ∈ (-∞, -τ_sat) ∪ (τ_sat, +∞), where e_q is called saturation or overload
error (e_sat);
5.2. Instantaneous quantization 349
2. the region s ∈ [-τ_sat, τ_sat], where e_q is called granular error (e_gr); the interval
[-τ_sat, τ_sat] is also called the quantizer range.
It is often useful to compactly represent the quantizer characteristic on a single axis, as
illustrated in Figure 5.21c, where the values of the decision thresholds are indicated by
dashed lines, and the quantizer output values by dots.
[Figure: probability density p_{e_q}(a), uniform with height 1/Δ on the interval (-Δ/2, Δ/2).]
We note that if s(k) is a constant signal the above assumptions do not hold; they hold
in practice if {s(k)} is described by a function that deviates significantly from a constant
and Δ is adequately small, that is, b is large. Figure 5.24 illustrates the quantization error
for a 16-level quantized signal. The signal e_q(t) is quite different from s(t) and the above
assumptions are plausible.
If the probability density function of the signal to be quantized is known, letting g denote the
function that relates s and e_q, that is e_q = g(s), also called the quantization error characteristic,
the probability density function of the noise is obtained as an application of the theory of
functions of a random variable, which yields

    p_{e_q}(a) = Σ_{b ∈ g^{-1}(a)} p_s(b) / |g'(b)|,    -Δ/2 < a < Δ/2        (5.30)

where g^{-1}(·) is the inverse of the error function, or equivalently the set of values of s
corresponding to a given value of e_q. We note that in this case the slope of the function g
is always equal to one, hence g'(b) = 1, and from (5.15) for -Δ/2 < a < Δ/2 we get

    g^{-1}(a) = { Q_i - a,  i = -L/2, ..., -1, 1, ..., L/2 }        (5.31)

Finally,

    p_{e_q}(a) = Σ_{i=-L/2, i≠0}^{L/2} p_s(Q_i - a),    -Δ/2 < a < Δ/2        (5.32)

It can be shown that, if Δ is small enough, the sum in (5.32) gives origin to a uniform
function p_{e_q}, independently of the form of p_s.
    Λ_q = E[s^2(k)] / E[e_q^2(k)]        (5.33)

    M_{e_q} ≈ M_{e_gr} ≈ Δ^2/12        (5.34)
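The Δ^2/12 figure follows from the uniform-error assumption: a quick numerical confirmation (Δ and the grid size are arbitrary illustrative choices) that the second moment of a density of height 1/Δ on [-Δ/2, Δ/2] is Δ^2/12:

```python
# Second moment of the uniform granular error:
#   integral over [-delta/2, delta/2] of a^2 * (1/delta) da = delta^2 / 12.
delta = 0.25
N = 100000
h = delta / N
m2 = sum(((-delta / 2 + k * h) ** 2) / delta * h for k in range(N))
print(m2, delta ** 2 / 12)   # the two values agree
```

This is why halving Δ (one extra bit) reduces the granular noise power by a factor of 4, i.e. about 6 dB per bit.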
For an exact computation that includes also the saturation error we need to know the
probability density function of s. The statistical power of e_q is given by

    M_{e_q} = E[e_q^2(k)] = ∫_{-∞}^{+∞} [Q(a) - a]^2 p_s(a) da
            = ∫_{-τ_sat}^{τ_sat} [Q(a) - a]^2 p_s(a) da + ∫_{-∞}^{-τ_sat} [Q(a) - a]^2 p_s(a) da        (5.35)
              + ∫_{τ_sat}^{+∞} [Q(a) - a]^2 p_s(a) da

In (5.35) the first term is the statistical power of the granular error, M_{e_gr}, and the other two
terms express the statistical power of the saturation error, M_{e_sat}. Let us assume that p_s(a)
is even and the characteristic is symmetrical, i.e. τ_{-i} = -τ_i and Q_{-i} = -Q_i; then we get

    M_{e_gr} = 2 [ Σ_{i=1}^{L/2-1} ∫_{τ_{i-1}}^{τ_i} (Q_i - a)^2 p_s(a) da + ∫_{τ_{L/2-1}}^{τ_sat} (Q_{L/2} - a)^2 p_s(a) da ]        (5.36)

    M_{e_sat} = 2 ∫_{τ_sat}^{+∞} (Q_{L/2} - a)^2 p_s(a) da        (5.37)

If the probability of saturation satisfies the relation P[|s(k)| > τ_sat] ≪ 1, then M_{e_sat} ≈ 0;
introducing the change of variable b = Q_i - a, as τ_i = Q_i + Δ/2 and τ_{i-1} = Q_i - Δ/2,
we have

    M_{e_q} ≈ M_{e_gr} = 2 Σ_{i=1}^{L/2} ∫_{-Δ/2}^{Δ/2} b^2 p_s(Q_i - b) db        (5.38)

    M_{e_q} ≈ Δ^2/12        (5.41)

assuming that τ_sat is large enough, so that the saturation error is negligible, and Δ is
sufficiently small to verify (5.39).
For example, if s(k) ∈ N(0, σ_s^2), then⁶

    P_sat = 2 Q(τ_sat/σ_s) = { 0.046       for τ_sat/σ_s = 2
                             { 0.0027      for τ_sat/σ_s = 3
                             { 0.000063    for τ_sat/σ_s = 4
2. Choose L so that the signal-to-quantization error ratio assumes a desired value,

    Λ_q = M_s / M_{e_q} ≈ σ_s^2 / (Δ^2/12) = 3 k_f^2 L^2        (5.44)
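The saturation probabilities quoted above can be checked numerically; a sketch using the complementary error function, via the identity Q(x) = (1/2) erfc(x/√2) for the Gaussian tail function (the function name is illustrative):

```python
import math

def p_sat(ratio: float) -> float:
    """P_sat = 2*Q(tau_sat/sigma_s) = erfc(ratio / sqrt(2)) for Gaussian input."""
    return math.erfc(ratio / math.sqrt(2))

for r in (2, 3, 4):
    print(r, p_sat(r))
# Matches the tabulated values 0.046, 0.0027, 0.000063.
```

Each additional σ_s of headroom reduces P_sat by more than an order of magnitude, which is why τ_sat is usually set to a few times σ_s.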
Example 5.2.1
Let s(k) ∈ U[-s_max, s_max]. Setting τ_sat = s_max, we get

    τ_sat/σ_s = s_max/σ_s = √3  ⟹  (Λ_q)_dB = 6.02 b        (5.47)

Example 5.2.2
Let s(k) = s_max cos(2π f_0 T_c k + φ). Setting τ_sat = s_max, we get

    τ_sat/σ_s = s_max/σ_s = √2  ⟹  (Λ_q)_dB = 6.02 b + 1.76        (5.48)

Example 5.2.3
For s(k) not limited in amplitude, and assuming P_sat negligible for τ_sat = 4σ_s, we get

    (Λ_q)_dB = 6.02 b - 7.2        (5.49)
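These closed forms can be checked against (5.44); a sketch assuming k_f = σ_s/τ_sat, so that Λ_q = 3 k_f^2 L^2 with L = 2^b:

```python
import math

def lambda_q_db(kf: float, b: int) -> float:
    """Lambda_q = 3 * kf^2 * L^2 in dB, with L = 2^b and kf = sigma_s/tau_sat."""
    return 10 * math.log10(3 * kf ** 2 * (2 ** b) ** 2)

b = 8
print(lambda_q_db(1 / math.sqrt(3), b))   # uniform input:  ~6.02*b
print(lambda_q_db(1 / math.sqrt(2), b))   # sinusoid:       ~6.02*b + 1.76
print(lambda_q_db(1 / 4, b))              # tau_sat=4*sigma: ~6.02*b - 7.2
```

The three cases differ only through k_f: the larger the required headroom τ_sat relative to σ_s, the lower the achievable signal-to-quantization error ratio for a given number of bits.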
Figure 5.25. Signal-to-quantization error ratio versus $\sigma_s/\tau_{sat}$ of a uniform quantizer for granular noise only (dashed lines), for a Laplacian signal (dashed-dotted lines), and of a $\mu$-law ($\mu = 255$) quantizer (continuous lines). The parameter $b$ is the number of bits of the quantizer. The expression of $\Lambda_q$ for granular noise only is given by (5.46) and (5.64) for a uniform and a $\mu$-law quantizer, respectively.
The plot of $\Lambda_q$, given by (5.46), versus the statistical power of the input signal is illustrated in Figure 5.25 for various values of $b$. We note that for values of $\sigma_s$ near $\tau_{sat}$ the approximation $M_{e_q} \simeq M_{e_{gr}}$ is no longer valid, because $M_{e_{sat}}$ becomes non-negligible. For the computation of $M_{e_{sat}}$ we need to know the probability density function of $s$ and apply (5.35). Assuming a Laplacian signal we obtain the curves also shown in Figure 5.25, which coincide with the curves given by (5.46) for $\sigma_s \ll \tau_{sat}$.
The optimization of $\Lambda_q$ for the uniform quantization of a signal with a specified amplitude distribution yields the results given in Table 5.5 [4]. We note that for the more dispersive inputs the optimum value of $\Delta$ increases, and consequently the value of $\Lambda_q$ decreases. We also note that the quantizers obtained by the optimization procedure and by the method on page 353 are in general different.
Example 5.2.4
For $s(k) \in \mathcal{N}(0, \sigma_s^2)$ and $b = 5$, from Table 5.5 we have optimum performance for $\Delta/\sigma_s = 0.1881$, and consequently $\tau_{sat} = 2^{b-1} \Delta = 3.01\,\sigma_s$. As shown in Figure 5.26, the optimum value of $\Lambda_q$ is obtained by determining the minimum of $(M_{e_{gr}} + M_{e_{sat}})/M_s$ as a function of $\sigma_s/\tau_{sat}$. The optimum point depends on $b$: we have $\tau_{sat} = \ell\,\sigma_s$, where $\ell$ increases with $b$; in particular for $b = 3$ it turns out that $\tau_{sat} = 2.3\,\sigma_s$, whereas for $b = 8$ we obtain $\tau_{sat} = 3.94\,\sigma_s$.
Table 5.5 Optimal quantization step size and corresponding maximum value of $\Lambda_q$ of a uniform quantizer for different $p_s(a)$ (U: uniform, G: Gaussian, L: Laplacian, $\Gamma$: gamma). [From Jayant and Noll (1984).]
Figure 5.26. Determination of the optimum value of $\Lambda_q$ for $b = 5$ and $s(k) \in \mathcal{N}(0, \sigma_s^2)$.
We conclude this section observing that for a non-stationary signal, for example a voice signal, setting $\tau_{sat} = 4\sigma_s$, where $\sigma_s^2$ is computed for a voiced spurt, yields $\Lambda_q \simeq 33$ dB for $b = 7$, good enough for telephone communications. However, in an unvoiced spurt $\sigma_s^2$ can be reduced by 20–30 dB, and consequently $\Lambda_q$ is degraded by an amount equivalent to 3–5 bits.
[Figures: serial quantizer implementation by successive approximation (a comparator with threshold adjust logic, driven by a clock and a reference voltage, producing the code bits serially), and parallel (flash) implementation with $2^{b-1}$ comparators at thresholds $\tau_1, \ldots, \tau_{2^{b-1}}$ followed by $(b-1)$-bit decoding logic producing the code word.]
uniform quantizers are suboptimum. The second refers to non-stationary signals, e.g., speech, for which the ratio between the instantaneous power (estimated over windows of tens of milliseconds) and the average power (estimated over the whole signal) can exhibit variations of several dB; moreover, the variation of the average power over different links is also of the order of 40 dB. Under these conditions a quantizer with a non-uniform characteristic, as that depicted for example in Figure 5.31, is more effective, because the signal-to-quantization error ratio $\Lambda_q$ is almost independent of the instantaneous power. As also illustrated in Figure 5.31, for a non-uniform quantizer the quantization error is large if the signal is large, whereas it is small if the signal is small: as a result the ratio $\Lambda_q$ tends to remain constant over a wide dynamic range of the input signal.
3. The most popular method, depicted in Figure 5.33, employs a uniform quantizer
having a large number of levels, with a step size equal to the minimum step size
of the desired non-uniform characteristic. Encoding of the non-uniformly quantized
signal yq is obtained by a look-up table whose input is the uniformly quantized
value xq .
Encoding. Let

$$s(k) = e^{y(k)}\, \mathrm{sgn}[s(k)] \qquad (5.53)$$

that is

$$y(k) = \ln |s(k)| \qquad (5.54)$$
362 Chapter 5. Digital representation of waveforms
Figure 5.33. Non-uniform quantizer implemented digitally using a uniform quantizer with
small step size followed by a look-up table.
Figure 5.34. Non-uniform quantization by companding and uniform quantization: (a) PCM
encoder, (b) decoder.
and assume the sign of the quantized signal is equal to that of $s(k)$. The quantization of $y(k)$ yields

$$y_q(k) = Q[y(k)] = \ln|s(k)| + e_q(k) \qquad (5.55)$$

The value $c(k)$ is given by the inverse bit mapping of $y_q(k)$ and the sign of $s(k)$.

Decoder. Assuming $c(k)$ is correctly received, observing (5.55), the quantized version of $s(k)$ is given by

$$s_q(k) = e^{y_q(k)}\, \mathrm{sgn}[s(k)] = |s(k)|\, \mathrm{sgn}[s(k)]\, e^{e_q(k)} = s(k)\, e^{e_q(k)} \qquad (5.56)$$

If $e_q \ll 1$, then

$$e^{e_q(k)} \simeq 1 + e_q(k) \qquad (5.57)$$
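A minimal sketch of this log-companded PCM (the range limits `y_min`, `y_max` are illustrative assumptions, not from the text): since by (5.56) the reconstruction error is multiplicative, the measured SNR is nearly independent of the input power.

```python
import numpy as np

def log_pcm_encode(s, b, y_min, y_max):
    # quantize y = ln|s| with a uniform b-bit quantizer on [y_min, y_max];
    # the sign of s is kept separately, as in (5.53)-(5.55)
    y = np.log(np.abs(s))
    delta = (y_max - y_min) / 2**b
    idx = np.clip(np.floor((y - y_min) / delta), 0, 2**b - 1).astype(int)
    return idx, np.sign(s)

def log_pcm_decode(idx, sign, b, y_min, y_max):
    delta = (y_max - y_min) / 2**b
    yq = y_min + (idx + 0.5) * delta
    return sign * np.exp(yq)          # s_q(k) = e^{y_q(k)} sgn[s(k)]

rng = np.random.default_rng(1)
snrs = []
for sigma in (0.1, 1.0):              # two input powers 20 dB apart
    s = sigma * rng.standard_normal(50_000)
    idx, sg = log_pcm_encode(s, 8, np.log(1e-4), np.log(10.0))
    sq = log_pcm_decode(idx, sg, 8, np.log(1e-4), np.log(10.0))
    snrs.append(10 * np.log10(np.mean(s**2) / np.mean((s - sq)**2)))
    print(snrs[-1])
```

Both printed SNRs are close to $12/\delta_y^2$ in dB, where $\delta_y$ is the step size on the log axis, regardless of $\sigma$.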
5.3. Non-uniform quantizers 363
and

$$s_q(k) \simeq s(k)\,[1 + e_q(k)]$$

[Figures: compression characteristics $F(s)$ versus $s$ for the A-law (curves for $A = 87.56$ and $A = 1$) and for the $\mu$-law (curves for $\mu = 0$, 5, 50, and 255).]
Curves of $\Lambda_q$ versus the statistical power of the input signal are plotted for $\mu = 255$ in Figure 5.25. Note that in the saturation region they coincide with the curves obtained for a uniform quantizer with Laplacian input. We emphasize that also in this case $\Lambda_q$ increases by 6 dB when $b$ increases by one. We also note that, if $b = 8$, $\Lambda_q \simeq 38$ dB for a wide range of values of $\sigma_s$. An effect not shown in Figure 5.25 is that, by increasing $\mu$, the plot of $\Lambda_q$ becomes "flatter", but the maximum value decreases.
Observation 5.2
In standard non-linear PCM, a quantizer with 128 levels (7 bits/sample) is employed after the compression; including also the sign we have 8 bits/sample. For a sampling frequency of $F_c = 8$ kHz, this leads to a system bit rate of $R_b = 64$ kbit/s.
Digital compression
An alternative method to the compression-quantization scheme is illustrated by an example in Figure 5.37. The relation between $s(k)$ and $y_q(k)$ is obtained through a first multi-bit (5 in the figure) quantization to generate $x_q$; then the 5 bits of $x_q$ are mapped to the 3 bits of $y_q$ using the mapper (sign omitted) of Table 5.6.
For decoding, for each code word $y_q$ we select a single code word $x_q$, which represents the reconstructed value $s_q$.
Using the standard compression laws, we need to approximate the compression functions by piecewise linear functions, as shown in Figure 5.38. For encoding, a mapper with 12-bit input and 8-bit output is given in Table 5.7. For decoding, we select for each compressed
Figure 5.37. Distribution of quantization levels for a 3-bit $\mu$-law quantizer with $\mu = 40$.
code word a corresponding linear code word, as given in the third column of Table 5.7. In the literature there are other non-linear PCM tables, which differ in the compression law or in the accuracy of the codes [4].
Figure 5.38. Piecewise linear approximation of the A-law compression function ($A = 87.6$). The 12-bit encoded input signals are mapped into 8-bit signals.
Assuming $p_s(a)$ even, because of the symmetry of the problem we can halve the number of variables to be determined by setting

$$\begin{cases} \tau_{-i} = -\tau_i & i = 1, \ldots, \dfrac{L}{2} - 1 \\ \tau_0 = 0 \end{cases} \qquad (5.68)$$

$$Q_{-i} = -Q_i \qquad i = 1, \ldots, \frac{L}{2} \qquad (5.69)$$

and

$$M_{e_q} = 2 \sum_{i=1}^{L/2} \int_{\tau_{i-1}}^{\tau_i} (Q_i - a)^2\, p_s(a)\, da \qquad (5.70)$$
$$\frac{\partial M_{e_q}}{\partial \tau_i} = 0 \qquad i = 1, \ldots, \frac{L}{2} - 1 \qquad (5.71)$$

$$\frac{\partial M_{e_q}}{\partial Q_i} = 0 \qquad i = 1, \ldots, \frac{L}{2} \qquad (5.72)$$
Figure 5.40. Decision thresholds and output levels for a particular $p_s(a)$ ($b = 3$).
From

$$\frac{1}{2}\,\frac{\partial M_{e_q}}{\partial \tau_i} = (Q_i - \tau_i)^2\, p_s(\tau_i) - (Q_{i+1} - \tau_i)^2\, p_s(\tau_i) \qquad (5.73)$$

(5.71) gives

$$p_s(\tau_i)\,[Q_i^2 + \tau_i^2 - 2 Q_i \tau_i - Q_{i+1}^2 - \tau_i^2 + 2 Q_{i+1} \tau_i] = 0 \qquad (5.74)$$

that is

$$\tau_i = \frac{Q_i + Q_{i+1}}{2} \qquad (5.75)$$

Conversely, the equation

$$\frac{1}{2}\,\frac{\partial M_{e_q}}{\partial Q_i} = 2 \int_{\tau_{i-1}}^{\tau_i} (Q_i - a)\, p_s(a)\, da = 0 \qquad (5.76)$$

yields

$$Q_i = \frac{\displaystyle\int_{\tau_{i-1}}^{\tau_i} a\, p_s(a)\, da}{\displaystyle\int_{\tau_{i-1}}^{\tau_i} p_s(a)\, da} \qquad (5.77)$$
In other words, (5.75) establishes that the optimal threshold lies in the middle of the interval between two adjacent output values, and (5.77) sets $Q_i$ as the centroid of $p_s(\cdot)$ in the interval $[\tau_{i-1}, \tau_i]$. These two rules are illustrated in Figure 5.41.
Max algorithm
We now present the Max algorithm to determine the optimum decision thresholds and quantization levels.

1. Having fixed $Q_1$ "at random", we use (5.77) to get $\tau_1$ from the integral equation

$$Q_1 = \frac{\displaystyle\int_{\tau_0}^{\tau_1} a\, p_s(a)\, da}{\displaystyle\int_{\tau_0}^{\tau_1} p_s(a)\, da} \qquad (5.78)$$
Figure 5.41. Optimum decision thresholds and output levels for a given $p_s(a)$.
then the parameters determined are optimum. Otherwise, if (5.81) is not satisfied, we must change our choice of $Q_1$ in step 1 and repeat the procedure.
Lloyd algorithm
This algorithm uses (5.75) and (5.77), but in a different order.

1. We set a relative error $\varepsilon > 0$ and $D_0 = \infty$.
2. We choose an initial partition of the positive real axis:

$$D_j = E[e_q^2] = 2 \sum_{i=1}^{L/2} \int_{\tau_{i-1}}^{\tau_i} (Q_i - a)^2\, p_s(a)\, da \qquad (5.83)$$

6. If

$$\frac{D_{j-1} - D_j}{D_j} < \varepsilon \qquad (5.84)$$
we have that

$$M_{e_q} = 2 \sum_{i=1}^{L/2} \int_{\tau_{i-1}}^{\tau_i} (Q_i - a)^2\, p_s(a)\, da \simeq 2 \sum_{i=1}^{L/2} p_s(\tau_{i-1}) \int_{\tau_{i-1}}^{\tau_i} (Q_i - a)^2\, da \qquad (5.86)$$

and

$$Q_i = \frac{\displaystyle\int_{\tau_{i-1}}^{\tau_i} a\, da}{\displaystyle\int_{\tau_{i-1}}^{\tau_i} da} = \frac{\tau_i + \tau_{i-1}}{2} \qquad (5.88)$$

Defining

$$(\Delta\tau_i) = \tau_i - \tau_{i-1} \qquad i = 1, \ldots, \frac{L}{2} \qquad (5.89)$$

we obtain

$$M_{e_q} = 2 \sum_{i=1}^{L/2} p_s(\tau_{i-1})\, \frac{(\Delta\tau_i)^3}{12} \qquad (5.90)$$
It is now a matter of finding the minimum of (5.90) with respect to $(\Delta\tau_i)$, with the constraint that the decision intervals cover the whole positive axis; this is obtained by imposing that

$$2 \sum_{i=1}^{L/2} p_s^{1/3}(\tau_{i-1})\, (\Delta\tau_i) \simeq 2 \int_0^{+\infty} p_s^{1/3}(a)\, da = K \qquad (5.91)$$

with $M_{e_q}$ given by (5.90). By setting to zero the partial derivative with respect to $(\Delta\tau_i)$ of (5.92), the Lagrangian built from (5.90) and the constraint (5.91), we obtain

$$\frac{(\Delta\tau_i)^2}{4}\, p_s(\tau_{i-1}) + \lambda\, p_s^{1/3}(\tau_{i-1}) = 0 \qquad i = 1, \ldots, \frac{L}{2} - 1 \qquad (5.93)$$

which yields

$$(\Delta\tau_i) = 2\sqrt{-\lambda}\; p_s^{-1/3}(\tau_{i-1}) \qquad (5.94)$$

$$\sqrt{-\lambda} = \frac{K}{2L} \qquad (5.95)$$

hence

$$(\Delta\tau_i) = \frac{K}{L}\, p_s^{-1/3}(\tau_{i-1}) \qquad i = 1, \ldots, \frac{L}{2} - 1 \qquad (5.96)$$

and the minimum value of $M_{e_q}$ is given by

$$M_{e_q,opt} = \frac{K^3}{12 L^2} \qquad (5.97)$$
For a quantizer optimized for a certain probability density function, and for a large number of levels $L = 2^b$ (so that (5.85) holds), we have

$$\Lambda_q = \frac{M_s}{M_{e_q,opt}} = \frac{2^{2b}}{f_f} \qquad (5.98)$$

where $f_f$ is a form factor related to the amplitude distribution of the normalized signal $\tilde{s}(k) = s(k)/\sigma_s$,

$$f_f = \frac{\tilde{K}^3}{12} \qquad \tilde{K} = \int_{-\infty}^{+\infty} p_{\tilde{s}}^{1/3}(a)\, da \qquad (5.99)$$

In the Gaussian case, $s(k) \in \mathcal{N}(0, \sigma_s^2)$, $\tilde{s}(k) \in \mathcal{N}(0, 1)$, and $f_f = \sqrt{3}\,\pi/2$.
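The Gaussian form factor can be checked numerically (a sketch, not from the text): with $f_f = \sqrt{3}\,\pi/2 \approx 2.72$, (5.98) reproduces the familiar high-rate rule $(\Lambda_q)_{dB} \approx 6.02\,b - 4.35$.

```python
import numpy as np

# form factor f_f = K^3 / 12, with K the integral of p(a)^(1/3) as in (5.99),
# evaluated by a Riemann sum for the unit-variance Gaussian
a = np.linspace(-20, 20, 400_001)
da = a[1] - a[0]
p = np.exp(-a**2 / 2) / np.sqrt(2 * np.pi)
K = np.sum(p**(1 / 3)) * da
ff = K**3 / 12
print(ff, np.sqrt(3) * np.pi / 2)          # both ~ 2.7207

# high-rate SNR (5.98): Lambda_q = 2^(2b) / f_f
for b in (3, 4):
    print(b, 10 * np.log10(4.0**b / ff))   # ~ 6.02 b - 4.35 dB
```

For $b = 4$ this gives about 19.7 dB, close to the exact optimum value of 20.20 dB reported for $L = 16$ in the tables below; the high-rate approximation slightly underestimates the exact result.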
In fact, (5.96) indicates that the optimal thresholds are concentrated around the peak of the probability density; moreover, the optimum value of $\Lambda_q$, according to (5.98), follows the increment law of 6 dB per bit, as in the case of the granular error of a uniform quantizer.
Observation 5.3
Equation (5.90) can be used to evaluate $M_{e_q}$ approximately for a general quantizer characteristic, even of the A-law and $\mu$-law types. In this case, from Figure 5.32, the quantization step size $\Delta$ of the uniform quantizer is related to the compression law according to the relation

$$(\Delta\tau_i) \simeq \frac{\Delta}{F'(\tau_{i-1})} \qquad (5.100)$$

where $F'$ is the derivative of $F$. Obviously (5.100) assumes that $F'$ does not vary considerably in the interval $(\tau_{i-1}, \tau_i]$.
Substituting (5.100) in (5.90) we have

$$M_{e_q} = 2 \sum_{i=1}^{L/2} p_s(\tau_{i-1}) \left[\frac{\Delta}{F'(\tau_{i-1})}\right]^2 \frac{(\Delta\tau_i)}{12} \qquad (5.101)$$
valid for both uniform and non-uniform quantizers [4]. The optimum value of $\Lambda_q$ follows the $6b$ law only in the case of non-uniform quantizers, for $b \geq 4$. In the case of uniform quantizers, with increasing $b$ the maximum of $\Lambda_q$ occurs for a smaller ratio $\sigma_s/\tau_{sat}$ (due to the saturation error): this makes $\Delta\Lambda_q$ vary with $b$, and in fact it increases.
Finally, we consider what happens if a quantizer, optimized for a specific input distribution, has a different type of input. For example, a uniform quantizer, best
Table 5.8 Optimum decision thresholds and output levels of a non-uniform quantizer for a Gaussian input with unit variance.

            L = 2           L = 4           L = 8           L = 16
   i      τ_i    Q_i     τ_i    Q_i     τ_i    Q_i     τ_i    Q_i
   1      ∞     0.798   0.982  0.453   0.501  0.245   0.258  0.128
   2                    ∞      1.510   1.050  0.756   0.522  0.388
   3                                   1.748  1.344   0.800  0.657
   4                                   ∞      2.152   1.099  0.942
   5                                                  1.437  1.256
   6                                                  1.844  1.618
   7                                                  2.401  2.069
   8                                                  ∞      2.733
  M_eq       0.363          0.117          0.0345         0.00955
  Λ_q (dB)   4.40           9.30           14.62          20.20
Table 5.9 Optimum decision thresholds and output levels of a non-uniform quantizer for a Laplacian input with unit variance.

            L = 2           L = 4           L = 8           L = 16
   i      τ_i    Q_i     τ_i    Q_i     τ_i    Q_i     τ_i    Q_i
   1      ∞     0.707   1.127  0.420   0.533  0.233   0.264  0.124
   2                    ∞      1.834   1.253  0.833   0.567  0.405
   3                                   2.380  1.673   0.920  0.729
   4                                   ∞      3.087   1.345  1.111
   5                                                  1.878  1.578
   6                                                  2.597  2.178
   7                                                  3.725  3.017
   8                                                  ∞      4.432
  M_eq       0.500          0.1761         0.0545         0.0154
  Λ_q (dB)   3.01           7.54           12.64          18.12
Table 5.10 Optimum decision thresholds and output levels of a non-uniform quantizer for a gamma-distributed input with unit variance.

            L = 2           L = 4           L = 8           L = 16
   i      τ_i    Q_i     τ_i    Q_i     τ_i    Q_i     τ_i    Q_i
   1      ∞     0.577   1.268  0.313   0.527  0.155   0.230  0.073
   2                    ∞      2.223   1.478  0.899   0.591  0.387
   3                                   3.089  2.057   1.051  0.795
   4                                   ∞      4.121   1.633  1.307
   5                                                  2.390  1.959
   6                                                  3.422  2.822
   7                                                  5.128  4.061
   8                                                  ∞      6.195
  M_eq       0.6680         0.2326         0.0712         0.0196
  Λ_q (dB)   1.77           6.33           11.47          17.07
for a uniform input, will have very low performance for an input signal with a
very dispersive distribution; on the contrary, a non-uniform quantizer, optimized for
a specific distribution, can have even higher performance for a less dispersive input
signal.
Figure 5.42. Performance comparison of uniform (dashed line) and non-uniform (continuous line) quantizers, optimized for a specific probability density function of the input signal. Input type: uniform (U), Laplacian (L), Gaussian (G) and gamma ($\Gamma$) [4]. [From Jayant and Noll (1984).]
Figure 5.43. Comparison of the signal-to-quantization error ratio for a uniform quantizer (dashed-dotted line), $\mu$-law (continuous line) and optimum non-uniform quantizer (dotted line), for Laplacian input. All quantizers have 32 levels ($b = 5$) and are optimized for $\sigma_s = 1$.
5.4. Adaptive quantization 377
The $\Gamma$ quantizers have performance that is almost independent of the type of input. The performance also does not change over a wide range of the signal variance, as their characteristic is of logarithmic type (see Section 5.3.1).
A comparison between uniform and non-uniform quantizers with Laplacian input is given in Figure 5.43. All quantizers have 32 levels ($b = 5$) and are determined using: a) Table 5.5 for the uniform Laplacian-type quantizer; b) Table 5.9 for the non-uniform Laplacian-type quantizer; c) the $\mu$-law ($\mu = 255$) compression of Figure 5.36 with $\tau_{sat}/\sigma_s = 1$. We note that the optimum non-uniform quantizer gives the best performance, even if this happens over a narrow range of values of $\sigma_s$; for a decrease in the input statistical power, performance decreases according to the law $10 \log M_s = 20 \log \sigma_s$ (dB), as we can see from (5.98). Only a logarithmic quantizer is independent of the input signal level.
General scheme
The overall scheme is given in Figure 5.44, where $\tilde{c}(k) \neq c(k)$ if errors are introduced by the binary channel. For a uniform quantizer, the idea is to vary with time the quantization step size $\Delta(k)$ so that the quantizer characteristic adapts to the statistical power of the input signal.
If $\Delta(k)$ is the quantization step size at instant $k$, with reference to Figure 5.21 the quantizer characteristic is defined as

output levels:
$$Q_i(k) = \begin{cases} \left(i + \dfrac{1}{2}\right) \Delta(k) & i = -\dfrac{L}{2}, \ldots, -1 \\[2mm] \left(i - \dfrac{1}{2}\right) \Delta(k) & i = 1, \ldots, \dfrac{L}{2} \end{cases} \qquad (5.104)$$

thresholds:
$$\tau_i(k) = i\, \Delta(k) \qquad i = -\frac{L}{2} + 1, \ldots, -1, 0, 1, \ldots, \frac{L}{2} - 1$$

If $\Delta_{opt}$ is the optimum value of $\Delta$ for a given amplitude distribution of the input signal assuming $\sigma_s = 1$ (see Table 5.5), and $\sigma_s(k)$ is the standard deviation of the signal at instant $k$, then we can use the following rule:

$$\Delta(k) = \Delta_{opt}\; \sigma_s(k) \qquad (5.105)$$
For a non-uniform quantizer, we need to change the levels and thresholds according to the relations:

$$Q_i(k) = Q_{i,opt}\; \sigma_s(k) \qquad \tau_i(k) = \tau_{i,opt}\; \sigma_s(k) \qquad (5.106)$$

where $\{Q_{i,opt}\}$ and $\{\tau_{i,opt}\}$ are given in Tables 5.8, 5.9, and 5.10 for various input amplitude distributions.
As illustrated in Figure 5.45, an alternative to the scheme of Figure 5.44 is the following: the quantizer is fixed and the input is scaled by an adaptive gain $g$, so that a signal $\{y(k)\}$ with constant statistical power is generated, for example $\sigma_y^2 = 1$. Therefore we let

$$g(k) = \frac{1}{\sigma_s(k)} \qquad (5.107)$$
However, both methods require computing the statistical power $\sigma_s^2$ of the input signal.
Adaptive quantizers are classified as:

• feedback, if $\sigma_s$ is estimated by observing $\{s_q(k) = Q[s(k)]\}$ or $\{c(k)\}$, i.e. the signals at the output of the quantizer.
1. because of digital channel errors on both $c(k)$ and $(\sigma_s(k))_q$ (or $g_q(k)$), it may happen that $\tilde{s}_q(k) \neq s_q(k)$;
2. we need to determine the update frequency of $\sigma_s(k)$, that is, at what frequency $\sigma_s$ must be sampled, and how many bits must be used to represent $(\sigma_s(k))_q$;
3. the system bit rate is now the sum of the bit rates of $c(k)$ and of $(\sigma_s(k))_q$ (or $g_q(k)$).
Figure 5.46. APCM scheme with feedforward adaptive quantizer: a) encoder, b) decoder.
(a)
Figure 5.47. APCM scheme with feedforward adaptive gain and fixed quantizer: a) encoder,
b) decoder.
The data sequence that represents $\{(\sigma_s(k))_q\}$ or $\{g_q(k)\}$ is called side information. Two methods to estimate $\sigma_s^2(k)$ are given in Section 1.11.1. For example, using a rectangular window of $K$ samples, from (1.462) we have

$$\sigma_s^2(k - D) = \frac{1}{K} \sum_{n=k-(K-1)}^{k} s^2(n) \qquad (5.108)$$

where $D$ expresses a certain lead of the estimate with respect to the last available sample: typically $D = (K-1)/2$ or $D = K - 1$. If $D = K - 1$, $K$ samples need to be stored in a buffer and then the average power must be computed: obviously, this introduces a latency in the coding system that is not always tolerable. Moreover, windows usually do not overlap, hence $\sigma_s$ is updated every $K$ samples.
For an exponential filter instead, from (1.468) we have

$$\sigma_s^2(k) = a\, \sigma_s^2(k-1) + (1 - a)\, s^2(k) \qquad (5.109)$$

Typically in this case we choose $D = 0$. To determine the update frequency of $\sigma_s^2(k)$, we recall that the 3 dB bandwidth of the filter in (5.109) is equal to $B_\sigma = \dfrac{1 - a}{2\pi T_c}$, for $a > 0.9$. Typically, however, we prefer to determine $a$ from the equivalence (1.471) with the length of the rectangular window, which gives $a = 1 - (K - 1)^{-1}$: this means decimating, quantizing, and coding the values given by (5.109) every $K$ instants. In Table 5.11 we give, for three values of $a$, the corresponding values of $K - 1$ and $B_\sigma$ for $1/T_c = 8$ kHz.
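Both estimators can be sketched as follows (an illustration with assumed parameters, not from the text); the exponential filter needs no buffer and updates at every sample.

```python
import numpy as np

def power_rect(s, K):
    # sliding rectangular-window estimate as in (5.108), one value per sample
    s2 = s**2
    c = np.cumsum(np.concatenate(([0.0], s2)))
    return (c[K:] - c[:-K]) / K

def power_exp(s, a):
    # recursive exponential estimate: sigma2(k) = a*sigma2(k-1) + (1-a)*s2(k)
    out, acc = [], 0.0
    for v in s:
        acc = a * acc + (1 - a) * v * v
        out.append(acc)
    return np.array(out)

rng = np.random.default_rng(2)
s = 2.0 * rng.standard_normal(20_000)   # true power 4
K = 128
a = 1 - 1 / (K - 1)                     # equivalent-length choice of a
print(power_rect(s, K)[-1], power_exp(s, a)[-1])
```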
Performance
With the constraint that $\sigma_s$ varies within a specific range, $\sigma_{min} \leq \sigma_s \leq \sigma_{max}$, in order to keep $\Lambda_q$ relatively constant for a change of 40 dB in the input level, it must be

$$\frac{\sigma_{max}}{\sigma_{min}} \geq 100 \qquad (5.110)$$

In practice $\sigma_{min}$ controls the quantization error level for small input values (idle noise), whereas $\sigma_{max}$ controls the saturation error level.
For speech signals sampled at 8 kHz, Table 5.12 shows the performance of different fixed and adaptive 8-level ($b = 3$) quantizers. The estimate of the signal power is obtained by a rectangular window with $D = K - 1$; the decimation and quantization of $\sigma_s^2(k)$

Table 5.11 Time constant and bandwidth of a discrete-time exponential filter with parameter $a$ and sampling frequency 8 kHz.

          a                K − 1    B_σ (Hz)
  1 − 2^{−5} = 0.9688        32        40
  1 − 2^{−6} = 0.9844        64        20
  1 − 2^{−7} = 0.9922       128        10
Table 5.12 Performance comparison of fixed and adaptive quantizers for speech ($\Lambda_q$ in dB; the two adaptive columns correspond to window lengths $K = 128$ and $K = 1024$).

                                                fixed   adaptive (K = 128)   adaptive (K = 1024)
  non-uniform Q
    µ-law (µ = 100, τ_sat/σ_s = 8)               9.5         –                    –
    Gaussian (Λ_q,opt = 14.6 dB)                 7.3        15                   12.1
    Laplacian (Λ_q,opt = 12.6 dB)                9.9        13.3                 12.8
  uniform Q
    Gaussian (Λ_q,opt = 14.3 dB)                 6.7        14.7                 11.3
    Laplacian (Λ_q,opt = 11.4 dB)                7.4        13.4                 11.5
are not considered. Although $b = 3$ is a small value from which to draw conclusions, we note that using an adaptive Gaussian quantizer with $K = 128$ we get an 8 dB improvement over a non-adaptive quantizer. If $K \ll 128$ the side information becomes excessive; conversely, there is a performance loss of 3 dB for $K = 1024$.
this implies that the estimate (5.108) becomes

$$\sigma_{s_q}^2(k) = \frac{1}{K} \sum_{n=(k-1)-(K-1)}^{k-1} s_q^2(n)$$

with a lag of one sample. Likewise, the recursive estimate (5.109) becomes $\sigma_{s_q}^2(k) = a\, \sigma_{s_q}^2(k-1) + (1 - a)\, s_q^2(k-1)$. Because of the lag in estimating the level of the input signal and the computational complexity of the method itself, we now present an alternative method to estimate $\sigma_s$ adaptively.
Estimate of $\sigma_s(k)$
For an input with $\sigma_s = 1$ we compute the discrete amplitude distribution of the code words for a quantizer with $2^b$ levels and $|c(k)| \in \{1, 2, \ldots, L/2\}$. As illustrated in Figure 5.49 for $b = 3$, let

$$\begin{cases} P[|c(k)| = 1] = 2 \displaystyle\int_{\tau_0 = 0}^{\tau_{opt,1}} p_s(a)\, da = p_{c_1} \\[2mm] \qquad\vdots \\[2mm] P[|c(k)| = 4] = 2 \displaystyle\int_{\tau_{opt,3}}^{+\infty} p_s(a)\, da = p_{c_4} \end{cases} \qquad (5.111)$$
If $\sigma_s$ changes suddenly, the distribution of $|c(k)|$ will be very different from (5.111). For example, if $\sigma_s < 1$ it will be $P[|c(k)| = 1] \gg p_{c_1}$, while $P[|c(k)| = 4] \ll p_{c_4}$.
Figure 5.49. Output levels and optimum decision thresholds for Gaussian s.k/ with unit
variance (b D 3).
Figure 5.50. Adaptive quantizer where $\sigma_s$ is estimated using the code words.
The objective is therefore that of changing $\sigma_{s_q}(k)$ so that the optimal distribution is obtained for $|c(k)|$. The algorithm proposed by Jayant [4], illustrated in Figure 5.50, is given by

$$\sigma_{s_q}(k) = \sigma_{s_q}(k-1)\; p[|c(k-1)|] \qquad (5.112)$$

in which $\{p[i]\}$, $i = 1, \ldots, L/2$, are suitable parameters. For example, if $|c(k-1)| = 1$, then $\sigma_{s_q}$ must decrease to reduce the quantizer range, thus $p[1] < 1$; if instead $|c(k-1)| = L/2$, then $\sigma_{s_q}$ must increase to extend the quantizer range, thus $p[L/2] > 1$.
In practice, we vary $\sigma_{s_q}$ by small steps, imposing bounds on the variations, that is:

$$\sigma_{min} \leq \sigma_{s_q}(k) \leq \sigma_{max}$$
The problem now consists in choosing the parameters $\{p[i]\}$, $i = 1, \ldots, L/2$. Intuitively it should be

$$(p[1])^{p_{c_1}}\, (p[2])^{p_{c_2}} \cdots (p[L/2])^{p_{c_{L/2}}} = 1 \qquad (5.114)$$

from which

$$E[\ln \sigma_{s_q}(k)] = E[\ln p[|c(k-1)|]] + E[\ln \sigma_{s_q}(k-1)] \qquad (5.116)$$

In steady state we expect that $E[\ln \sigma_{s_q}(k)] = E[\ln \sigma_{s_q}(k-1)]$, therefore it must be

$$E[\ln p[|c(k-1)|]] = \sum_{i=1}^{L/2} p_{c_i} \ln p[i] = 0 \qquad (5.117)$$

as in (5.114).
Based on numerous tests on speech signals, Jayant also gave values for the parameters $\{p[i]\}$, $i = 1, \ldots, L/2$. Let

$$q(i) = \frac{2i - 1}{L - 1} \in \left\{ \frac{1}{L-1}, \frac{3}{L-1}, \ldots, 1 \right\} \qquad (5.118)$$

In Figure 5.51 the values of $\{p[i]\}$ are given in correspondence with $\{q(i)\}$, $i = 1, \ldots, L/2$ [4]. For example, for $L = 8$ the values of $\{p[i]\}$, $i = 1, \ldots, 4$, are in correspondence with the values $\{q(i)\} = \{1/7, 3/7, 5/7, 1\}$. Therefore $p[1]$ is in the range from 0.8 to 0.9, and $p[4]$ in the range from 1.8 to 2.9; we note that there is a large interval of possible values for $p[i]$, especially if the index $i$ is large.
Summarizing, at instant $k$, $\sigma_{s_q}(k)$ is known, and by (5.106) the decision thresholds $\{\tau_i(k)\}$ are also known. From the input sample $s(k)$, $c(k)$ is produced by the quantizer characteristic (see Figure 5.52). Then $\sigma_{s_q}(k+1)$ is computed by (5.112) and the thresholds are updated: the quantizer is now ready for the next sample $s(k+1)$.
At the receiver, the possible output values are also known from the knowledge of $\sigma_{s_q}(k)$ (see (5.106)). Upon reception of $c(k)$, the index $i$ in (5.106) is determined and, consequently, $s_q(k)$; in turn the receiver updates the value $\sigma_{s_q}(k+1)$ by (5.112). Experimental measurements on speech signals indicate that this feedback adaptive scheme offers performance similar to that of a feedforward scheme. An advantage of the Jayant algorithm is that it is sequential, thus it can adapt very quickly to changes in the mean signal level; on the other hand, it is strongly affected by errors introduced by the binary channel.
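A minimal sketch of this one-word-memory adaptation for a uniform mid-rise characteristic (the multiplier values below are illustrative choices within the ranges of Figure 5.51, not the ones tabulated by Jayant):

```python
import numpy as np

def jayant_adapt(s, b=3, M=(0.85, 0.95, 1.2, 2.0),
                 step_min=1e-4, step_max=1e4):
    # adaptive uniform mid-rise quantizer: the step size is multiplied by
    # M[|c(k)| - 1] at every sample, with hard bounds on its excursion
    L2 = 2**b // 2
    step, out = 1.0, []
    for v in s:
        mag = min(int(abs(v) / step), L2 - 1)          # |c(k)| - 1
        out.append(np.sign(v) * (mag + 0.5) * step)
        step = min(max(step * M[mag], step_min), step_max)
    return np.array(out)

rng = np.random.default_rng(3)
# input power jumps by 40 dB halfway through
s = np.concatenate([0.1 * rng.standard_normal(5000),
                    10.0 * rng.standard_normal(5000)])
sq = jayant_adapt(s)
snrs = []
for seg in (slice(1000, 5000), slice(6000, 10_000)):
    e = s[seg] - sq[seg]
    snrs.append(10 * np.log10(np.mean(s[seg]**2) / np.mean(e**2)))
    print(snrs[-1])
```

The two printed SNRs remain comparable despite the 40 dB level change, which is the point of the adaptation.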
Figure 5.51. Interval of the multiplier parameters in the quantization of the speech signal as a function of the parameters $\{q(i)\}$ [4]. [From Jayant and Noll (1984).]
5.5. Differential coding (DPCM) 385
Figure 5.52. Input--output characteristic of a 3-bit adaptive quantizer. For each output level
the PCM code word and the corresponding value of p are given.
7 In the following sections, as well as in some schemes of the previous section on adaptive quantization, when
processing of the input samples fs.k/g is involved, it is desirable to perform the various operations in the digital
domain on a linear PCM binary representation of the various samples, obtained by an ADC. Obviously, the
finite number of bits of this preliminary quantization should not affect further processing. To avoid introducing
a new signal, the preliminary conversion by an ADC is omitted in all our schemes.
8 The considerations presented in this section are valid for any predictor, even non-linear predictors.
Figure 5.54. (a) Prediction error filter; (b) Inverse prediction error filter.
then

$$\hat{S}(z) = C(z)\, S(z) \qquad (5.122)$$

and
Figure 5.55. DPCM scheme with quantizer inserted in the feedback loop: (a) encoder,
(b) decoder.
Encoder:

$$f(k) = s(k) - \hat{s}(k) \qquad (5.126)$$

$$\hat{s}(k+1) = \sum_{i=1}^{N} c_i\, s_q(k+1-i) \qquad (5.131)$$

In other words, the quantized prediction error is transmitted over the binary channel. Let

$$e_{q,f}(k) = f_q(k) - f(k) \qquad (5.132)$$

be the quantization error and $\Lambda_{q,f}$ the signal-to-quantization error ratio

$$\Lambda_{q,f} = \frac{E[f^2(k)]}{E[e_{q,f}^2(k)]} \qquad (5.133)$$
Recalling (5.98), we know that for an optimum quantizer, with the normalization by the standard deviation of $\{f(k)\}$, $\Lambda_{q,f}$ is only a function of the number of bits and of the probability density function of $\{f(k)\}$.
From (5.128) and (5.132), using (5.126), we have

$$s_q(k) = s(k) + e_{q,f}(k) \qquad (5.134)$$

To summarize, the reconstruction signal differs from the input signal, $s_q(k) \neq s(k)$, and the reconstruction error (or noise) depends on the quantization of $f(k)$, not of $s(k)$. Consequently, if $M_f < M_s$ then also $M_{e_{q,f}} < M_{e_q}$, and the DPCM scheme presents an advantage over PCM. Observing (5.134), the signal-to-noise ratio is given by

$$\Lambda_q = \frac{M_s}{M_{e_{q,f}}} = \frac{M_s}{M_f}\,\frac{M_f}{M_{e_{q,f}}} \qquad (5.135)$$
Given

$$G_p = \frac{M_s}{M_f} \qquad (5.136)$$

called the prediction gain, it follows that

$$\Lambda_q = G_p\, \Lambda_{q,f} \qquad (5.137)$$

where, observing (5.133), $\Lambda_{q,f}$ depends on the number of quantizer levels, which in turn determines the transmission bit rate, whereas $G_p$ depends on the predictor complexity and on the correlation sequence of the input $\{s(k)\}$. We observe that the input to the filter that yields $\hat{s}(k)$ in (5.129) is $\{s_q(k)\}$ and not $\{s(k)\}$; this causes a deterioration of $G_p$ with respect to the ideal case $\{s_q(k) = s(k)\}$. This decrease is more prominent the larger $\{e_{q,f}\}$ is with respect to $\{s(k)\}$.
If we ignore the dependence of $G_p$ on $\{e_{q,f}(k)\}$, (5.137) shows that, to obtain a given $\Lambda_q$, we can use a quantizer with few levels, provided the input $\{s(k)\}$ is highly predictable. Therefore $G_p$ can be sufficiently high even for a predictor of reduced complexity.
For the quantizer, assuming the distribution of $\{f(k)\}$ is known, $\Lambda_{q,f}$ is maximized by selecting the thresholds and the output values according to the techniques given in Section 5.3. In particular the statistical power of $\{f(k)\}$, useful in scaling the quantizer characteristic, can be derived from (5.136), assuming $M_s$ and $G_p$ known,

$$M_f = \frac{M_s}{G_p} \qquad (5.138)$$

Regarding the predictor, once the number of coefficients $N$ is fixed, we need to determine the coefficients $\{c_i\}$, $i = 1, \ldots, N$, that minimize $M_f$. For example in the case $N = 1$, recalling (2.91), the optimum value of $c_1$ is given by $\rho(1)$, the correlation coefficient of the input signal at lag 1. Then we have

$$G_p = \frac{1}{1 - \rho^2(1)} \qquad (5.139)$$

ignoring the effect of the quantizer, that is, for $\{s_q(k) = s(k)\}$.
Figure 5.56. (a) Reconstruction signal for DPCM, with (b) a 6-level quantizer.
We give in Table 5.13 the values of $G_p$ for three values of $\rho(1)$. We note that, for an input having $\rho(1) = 0.85$, a simple predictor with one coefficient yields a prediction gain equivalent to one bit of the quantizer: consequently, for a given total $\Lambda_q$, the DPCM scheme allows us to use a transmission bit rate lower than that of PCM. Evidently, for an input with $\rho(1) = 0$ there is no advantage in using the DPCM scheme.
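The prediction gain (5.139) can be observed directly on a first-order autoregressive input. The sketch below is an illustration with assumed parameters; the quantizer is a fine uniform one, so its effect on $G_p$ is negligible.

```python
import numpy as np

def dpcm_encode(s, c1, quantize):
    # DPCM with a one-coefficient predictor and quantizer in the feedback loop
    sq_prev, f_all = 0.0, []
    for v in s:
        shat = c1 * sq_prev            # prediction from the reconstructed sample
        f = v - shat                   # prediction error, as in (5.126)
        sq_prev = shat + quantize(f)   # reconstruction used by the predictor
        f_all.append(f)
    return np.array(f_all)

# AR(1) input with rho(1) = 0.9 and unit power
rng = np.random.default_rng(4)
rho = 0.9
s = np.zeros(50_000)
w = np.sqrt(1 - rho**2) * rng.standard_normal(len(s))
for k in range(1, len(s)):
    s[k] = rho * s[k - 1] + w[k]

quant = lambda f: np.clip(np.round(f / 0.05) * 0.05, -2.0, 2.0)
f = dpcm_encode(s, rho, quant)
Gp = np.mean(s**2) / np.mean(f**2)
print(10 * np.log10(Gp))                 # ~ 7.2 dB
print(10 * np.log10(1 / (1 - rho**2)))   # (5.139): 7.21 dB
```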
For a simple predictor with $N = 1$ and $c_1 = 1$, hence $\hat{s}(k) = s_q(k-1)$, Figure 5.56a illustrates the behavior of the reconstruction signal after DPCM with the six-level quantizer shown in Figure 5.56b. We note that the minimum level of the quantizer still determines the statistical power of the granular noise in $\{s_q(k)\}$; the maximum level of the quantizer is instead related to the slope overload distortion, in the sense that if $Q_{L/2}$ is not sufficiently large, as shown in Figure 5.56a, the output signal cannot follow the rapid changes of the input signal. In this specific case, being $Q_{L/2} < \max |s(k) - s(k-1)|$, $\{s_q(k)\}$ presents a slope different from that of $\{s(k)\}$ at the instants of maximum variation.
Figure 5.57. DPCM scheme with quantizer inserted after the feedback loop: a) encoder, b)
decoder.
noise present in fsq .k/g. An alternative consists in using the scheme of Figure 5.57, where
the following relations hold.
Encoder:

$$f(k) = s(k) - \hat{s}(k) \qquad (5.140)$$

Decoder:

$$s_{q,o}(k) = f_q(k) + \hat{s}_o(k) \qquad (5.144)$$

$$\hat{s}_o(k+1) = \sum_{i=1}^{N} c_i\, s_{q,o}(k+1-i) \qquad (5.145)$$
At the encoder, $\hat{s}(k)$ is obtained from the input signal without errors. However, the prediction signal reconstructed at the decoder is $\hat{s}_o(k) \neq \hat{s}(k)$. In fact, from (5.144) and
Observation 5.4
Note that the same problem mentioned above may also occur in the scheme of Figure 5.55 because of errors introduced by the binary channel, though to a lesser extent compared to the scheme of Figure 5.57, as the signal $\{f_q(k)\}$ at the encoder is affected by a smaller disturbance. For both configurations, however, the inverse prediction error filter must suppress the propagation of such errors within a short time interval. This is difficult to achieve if the transfer function $1/[1 - C(z)]$ has poles near the unit circle, and consequently the impulse response is very long.
$$\hat{s}(k) = \sum_{i=1}^{N} c_i\, s_q(k-i) \qquad (5.147)$$

where $s_q(k)$ is the reconstruction signal, which in the case of a feedback quantizer system is given by $s_q(k) = s(k) + e_{q,f}(k)$. For the design of the predictor, we choose the coefficients $\{c_i\}$ that minimize the statistical power of the prediction error,

$$\mathbf{c} = [c_1, \ldots, c_N]^T \qquad (5.149)$$

$$M_f = M_s\,(1 - \mathbf{c}_{opt}^T \boldsymbol{\rho}) \qquad (5.155)$$

The difficulty of this formulation is that, to determine the solution, we need to know the value of $\Lambda_q$ (see (5.153)). We may consider the solution with the quantizer omitted, hence $\Lambda_q = \infty$, which depends only on the second-order statistics of $\{s(k)\}$. In this case some efficient algorithms to determine $\mathbf{c}$ and $M_f$ in (5.154) and (5.155) are given in Sections 2.2.1 and 2.2.2.
Assuming that for $\{s(k)\}$ and $\{e_{q,f}(k)\}$ the correlations are expressed by (5.27) and (5.28), we get

$$c_{opt,1} = \frac{\rho(1)}{1 + 1/\Lambda_q} \qquad (5.158)$$

and

$$G_p = \frac{1}{1 - c_{opt,1}\,\rho(1)} = \frac{1}{1 - \dfrac{\rho^2(1)}{1 + 1/\Lambda_q}} \qquad (5.159)$$

The above relations show that if $\Lambda_q$ is small, that is, if the system is very noisy, then $c_{opt,1}$ is small and $G_p$ tends to 1. Only for $\Lambda_q = \infty$ is $c_{opt,1} = \rho(1)$.
It may occasionally happen that a suboptimum value is assigned to $c_1$: we now evaluate the corresponding value of $G_p$. For $N = 1$ and any $c_1$ it is

$$M_f = E[(s(k) - c_1\,(s(k-1) + e_{q,f}(k-1)))^2] \simeq M_s\,(1 - 2 c_1 \rho(1) + c_1^2) + c_1^2\, M_{e_{q,f}} \qquad (5.160)$$

As from (5.135) it follows that $M_{e_{q,f}} = M_s/\Lambda_q$, where from (5.137) $\Lambda_q = G_p\, \Lambda_{q,f}$, observing (5.160) we obtain

$$G_p = \frac{1 - (c_1^2/\Lambda_{q,f})}{1 - 2 c_1 \rho(1) + c_1^2} \qquad (5.161)$$

where $\Lambda_{q,f}$ depends only on the number of quantizer levels.
Note that (5.161) allows the computation of the optimum value of $c_1$ for a predictor with $N = 1$ in the presence of the quantizer: however, the expression is complicated and will not be given here. Rather, we derive $G_p$ for two values of $c_1$.
1. For $c_1 = \rho(1)$ we have

$$G_p = \frac{1}{1 - \rho^2(1)} \left(1 - \frac{\rho^2(1)}{\Lambda_{q,f}}\right) \qquad (5.162)$$

where the factor $(1 - \rho^2(1)/\Lambda_{q,f})$ is due only to the presence of the quantizer.

2. For $c_1 = 1$ we have

$$G_p = \frac{1}{2(1 - \rho(1))} \left(1 - \frac{1}{\Lambda_{q,f}}\right) \qquad (5.163)$$

We note that the choice $c_1 = 1$ leads to a simple implementation of the predictor; however, this choice results in $G_p > 1$ only if $\rho(1) > 1/2$.
Various experiments with speech have demonstrated that for very long observations, of the order of one second, the prediction gain for a fixed predictor is between 5 and 7 dB, and saturates for N ≥ 2; in fact, speech is a non-stationary signal and adaptive predictors should be used.
The vector c = [c_1, ..., c_N]^T is chosen to minimize M_f over short intervals within which the signal {s(k)} is quasi-stationary. Speech signals have slowly varying spectral characteristics and can be assumed stationary over intervals of the order of 5–25 ms.
Also for ADPCM two strategies emerge.
Figure 5.58. ADPCM scheme with feedforward adaptation of both predictor and quantizer:
(a) encoder, (b) decoder.
where μ_1 ≲ 1 controls the stability of the decoder if, because of binary channel errors, it occasionally happens that f̃_q(k) ≠ f_q(k).
Table 5.14 gives the algorithm, while Figure 5.61 illustrates the implementation; in decoding, the same equations are used with f̃_q(k) in place of f_q(k) and therefore s̃_q(k) in place of s_q(k).
396 Chapter 5. Digital representation of waveforms
Figure 5.59. (a) Speech level measured on windows of 128 samples, and corresponding prediction gain G_p for: (b) a fixed predictor (N = 3), (c) an adaptive predictor (N = 10). For these measurements the quantizer was removed.
Initialization:
    c(0) = 0
    s_q(0) = 0  (or s_q(0) = s(0))
    ŝ(0) = 0

For k = 0, 1, ...:
    f(k) = s(k) - ŝ(k)
    f_q(k) = Q[f(k)]
    c(k+1) = μ_1 c(k) + μ f_q(k) s_q(k)
    s_q(k) = ŝ(k) + f_q(k)
    ŝ(k+1) = c^T(k+1) s_q(k+1)
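The loop of Table 5.14 can be sketched as follows; the one-step uniform quantizer standing in for Q[·], the predictor order, and all parameter values (μ, μ_1, step size) are illustrative assumptions of this sketch, not values from the text.

```python
import numpy as np

def adpcm_encode(s, N=2, mu=0.01, mu1=0.999, step=0.05):
    """Sketch of the feedback-adaptation loop of Table 5.14. Here `sq` is
    the vector of past quantized samples used by the order-N predictor;
    the uniform quantizer with the given step stands in for Q."""
    c = np.zeros(N)          # predictor coefficients c(k)
    sq = np.zeros(N)         # [s_q(k-1), ..., s_q(k-N)]
    s_hat = 0.0              # prediction s^(k)
    fq_all = []
    for sk in s:
        f = sk - s_hat                      # f(k) = s(k) - s^(k)
        fq = step * np.round(f / step)      # f_q(k) = Q[f(k)]
        c = mu1 * c + mu * fq * sq          # leaky LMS update of c(k+1)
        sq = np.roll(sq, 1)
        sq[0] = s_hat + fq                  # s_q(k) = s^(k) + f_q(k)
        s_hat = c @ sq                      # s^(k+1) = c^T(k+1) s_q(k+1)
        fq_all.append(fq)
    return np.array(fq_all)
```

The decoder would run the same recursions with the received f̃_q(k) in place of f_q(k).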
Observation 5.5
For a stationary input, as for example a modem signal, LMS adaptive prediction can be used to easily determine the predictor coefficients; once convergence is reached, however, it is better to switch off the adaptation. This observation does not apply to speech, which presents characteristics that may change very rapidly with time.
Figure 5.60. ADPCM scheme with feedback adaptation for both predictor and quantizer:
(a) encoder, (b) decoder.
Performance
Objective and subjective experiments conducted on speech signals sampled at 8 kHz have
indicated that adopting ADPCM rather than PCM leads to a saving of 2 to 3 bits in encoding:
for example, a 5-bit ADPCM scheme yields the same quality as a 7-bit PCM. Obviously
in both cases the quantizer is non-uniform.
All-pole predictor
The predictor considered in the previous section implies an inverse prediction error filter or synthesis filter (see Figure 2.9) with transfer function

H(z) = \frac{S(z)}{F(z)} = \frac{1}{1 - C(z)}     (5.167)

which has only poles (neglecting zeros at the origin). The predictor refers to an input model whose samples are given by

s(k) = \sum_{i=1}^{N} a_i s(k-i) + w(k)     (5.168)

1. {w(k)} is white noise. Then (5.168) implies an AR(N) model for the input. The optimum predictor that minimizes E[(s(k) - ŝ(k))^2] is C(z) = \sum_{n=1}^{N} a_n z^{-n} and it yields f(k) = w(k).
2. {w(k)} is a periodic sequence of impulses,

w(k) = A \sum_{n=-\infty}^{+\infty} \delta_{k-nP}     (5.169)
All-zero predictor
For an MA input model, the prediction signal is given by

ŝ(k) = \sum_{i=1}^{q} b_i f(k-i)     (5.170)

Correspondingly, from (5.124), for s_q(k) = s(k), the synthesis filter has an FIR all-zero transfer function

H(z) = 1 + \sum_{i=1}^{q} b_i z^{-i}     (5.171)

Incidentally, we note that an approximate LMS adaptation of the coefficients {b_i} is given by

b_i(k+1) = b_i(k) + \mu_b f_q(k) f_q(k-i),  i = 1, ..., q     (5.172)
Pole-zero predictor
The general case refers to an ARMA(p, q) input model. In this case

ŝ(k) = \sum_{i=1}^{p} c_i s(k-i) + \sum_{i=1}^{q} b_i f(k-i)     (5.173)

Correspondingly, we have

H(z) = \frac{1 + \sum_{i=1}^{q} b_i z^{-i}}{1 - \sum_{i=1}^{p} c_i z^{-i}}     (5.174)
The equations (5.173) and (5.174) are illustrated in Figure 5.62. For the LMS adaptation of the coefficients in (5.173), we refer to (5.166) for the coefficients {c_i} and to (5.172) for the coefficients {b_i}. This configuration was adopted by the ADPCM G.721 standard at 32 kbit/s (see Table 5.16), which uses an LMS adaptive predictor with 2 poles and 4 zeros; the 5-bit quantizer is adapted by the Jayant scheme.
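A minimal sketch of an encoder built around such a pole-zero predictor, with both coefficient sets adapted by LMS updates of the form (5.172), might look as follows; the quantizer and all parameter values are illustrative assumptions, and the actual G.721 standard uses a more elaborate, robust adaptation.

```python
import numpy as np

def pole_zero_encode(s, p=2, q=4, mu_c=0.002, mu_b=0.002, step=0.05):
    """Sketch of an encoder with the pole-zero predictor (5.173): the
    {c_i} act on past reconstructed samples, the {b_i} on past quantized
    residuals, both adapted by LMS as in (5.172)."""
    c, b = np.zeros(p), np.zeros(q)
    s_past = np.zeros(p)     # past reconstructed samples
    fq_past = np.zeros(q)    # past quantized residuals
    out = []
    for sk in s:
        s_hat = c @ s_past + b @ fq_past    # prediction (5.173)
        f = sk - s_hat
        fq = step * np.round(f / step)      # quantized residual
        c = c + mu_c * fq * s_past          # LMS update of the pole part
        b = b + mu_b * fq * fq_past         # LMS update, cf. (5.172)
        s_past = np.roll(s_past, 1)
        s_past[0] = s_hat + fq              # reconstructed sample
        fq_past = np.roll(fq_past, 1)
        fq_past[0] = fq
        out.append(fq)
    return np.array(out)
```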
Pitch predictor
An alternative structure exploits the quasi-periodic behavior, of period P, of voiced spurts of speech. In this case it is convenient to use the estimate

ŝ(k) = ŝ_ℓ(k) + ŝ_s(k)     (5.175)

where
APC
Because P is usually in the range from 40 to 120 samples, for speech signals sampled at 8 kHz the whole predictor in (5.179) is in fact of the all-pole type with a very high order N. The subdivision into two terms, even if not optimum, has the advantage of allowing a very simple computation of the various parameters.
1. Computation of the long-term predictor through minimization of the cost function

\min_{\beta, P} E[f_\ell^2(k)] = E[(s(k) - \beta s(k-P))^2]     (5.180)

It follows:10

P = \arg\max_{n \neq 0} \rho(n)     (5.181)
From the estimate of the autocorrelation sequence of {s(k)}, once P and β are determined, the autocorrelation sequence of {f_ℓ(k)} is easily computed. Then the coefficients {c_i} of the long-term predictor can be obtained by solving a system of equations similar to (5.154), where the coefficient matrix and ρ depend on the autocorrelation coefficients of {f_ℓ(k)}.
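The pitch search (5.181) and the corresponding β of (5.180), restricted to the 40–120 sample lag range quoted above for 8 kHz speech, can be sketched as:

```python
import numpy as np

def pitch_estimate(s, n_min=40, n_max=120):
    """P is the lag maximizing the autocorrelation over the search range,
    cf. (5.181); beta is the coefficient minimizing (5.180) for that P.
    The raw (biased) correlation estimate is an assumption of this
    sketch; more robust pitch estimators exist."""
    s = np.asarray(s, dtype=float)
    r = [s[n:] @ s[:-n] for n in range(n_min, n_max + 1)]
    P = n_min + int(np.argmax(r))
    beta = r[P - n_min] / (s[:-P] @ s[:-P])
    return P, beta
```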
Experimental measurements, initially conducted by Atal [5], have demonstrated that, adapting the various coefficients by the feedforward method every 5 ms, we get high-quality speech reproduction with an overall bit rate of only 10 kbit/s, using only a one-bit quantizer. The encoder and decoder schemes are given in Figure 5.64: they form the adaptive predictive coding (APC) scheme, which differs from DPCM by the inclusion of the long-term predictor.
For voiced speech, the improvement given by the long-term predictor in lowering the LPC residual is shown in Figure 5.65. Without the long-term predictor the LPC residual presents a peak at every pitch period P; as shown in Figure 5.65c, these peaks are removed by the long-term predictor. The frequency-domain representation of the three signals in Figure 5.65 is given in Figure 5.66. We note that the plots in Figures 5.66a and 5.66b exhibit some spectral lines, due to the periodic behavior of the corresponding signals in the time domain, whereas these lines are attenuated in the plot of Figure 5.66c.
Figure 5.65. (a) Voiced speech, (b) LPC residual, (c) LPC residual with long-term predictor.
Figure 5.66. DFT of signals of Figure 5.65: (a) voiced speech, (b) LPC residual, (c) LPC
residual with long-term predictor.
Other long-term predictors with 2 or 3 coefficients have been proposed: although they are more effective, the determination of their parameters is much more complicated than the approach (5.180). There are also numerous methods that are more robust and effective than (5.181) to determine the pitch period P. To avoid this very laborious computation, all-pole predictors have been proposed with more than 50 coefficients, thus partly assimilating the long-term predictor into the overall predictor.
Two improvements with respect to the basic APC scheme are outlined in the following
observations [3].
Observation 5.6
From the standpoint of perception, it is important to have a signal-to-noise ratio that is
constant in the frequency domain: this yields the so-called spectral shaping of the error,
obtained by filtering the residual error so that it is reduced at frequencies where the signal
has low energy and enhanced at frequencies where the signal has high energy.
Observation 5.7
In APC, parameters associated with the prediction coefficients {c_i}, i = 1, ..., N, are normally sent: for example, reflection coefficients (PARCOR), area functions or line spectrum pairs (LSP).
Figure 5.67 shows the effect of oversampling on {x(k)}. In particular, from (5.187) and from Figure 5.67a we note that by increasing F_0 the samples {x(k)} become more correlated; moreover, from (5.188) we have that the spectrum of {x(k)} presents images that are more spaced apart from each other.
Let us now quantize {x(k)}. With reference to Figure 5.68, let x_q(k) be the quantized signal and e_q(k) the corresponding quantization error:

x_q(k) = x(k) + e_q(k)     (5.189)
Figure 5.67. Effects of oversampling for two values of the oversampling factor F0 .
For {e_q(k)} white with statistical power M_{e_q}, depending only on the number of quantizer levels (see (5.44) and (5.98)), we have

P_{e_q}(f) = M_{e_q} \frac{T_0}{F_0}     (5.190)

From (5.189), by filtering {x_q(k)} with an ideal lowpass filter g having bandwidth B and unit gain, at the output we will have

P_{e_q,o}(f) = M_{e_q} \frac{T_0}{F_0} \, \text{rect}\!\left(\frac{f}{2B}\right)     (5.193)

Then

M_{e_q,o} = \frac{M_{e_q}}{F_0}     (5.194)

\Lambda_{q,o} = \Lambda_q F_0     (5.195)

R_b = \frac{b}{T_c} = b \frac{F_0}{T_0}     (5.196)

\frac{1}{T_c} \gg 2B     (5.197)
and a quantizer with only two levels (b = 1). Then the encoder bit rate is equal to the sampling rate,

R_b = \frac{1}{T_c} \quad \text{(bit/s)}     (5.198)

The high value of F_0 implies a high predictability of the input sequence: therefore a predictor with a few coefficients gives a high prediction gain, and the quantizer can be reduced to the simplest case of b = 1. We note, moreover, that one-bit code words eliminate the need for framing of the code words at the transmitter and at the receiver, thus simplifying the overall system.
For a predictor with only one coefficient c_1, the coding scheme, which is illustrated in Figure 5.69, is called a linear delta modulator (LDM). The following relations hold.
Encoder:

f(k) = s(k) - ŝ(k)     (5.199)

f_q(k) = { +Δ (c(k) = 1)  if f(k) ≥ 0
         { -Δ (c(k) = 0)  if f(k) < 0     (5.200)

ŝ(k+1) = c_1 [ŝ(k) + f_q(k)]     (5.201)

Decoder:

f_q(k) = { +Δ  if c(k) = 1
         { -Δ  if c(k) = 0     (5.202)
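A direct transcription of the encoder and decoder relations (5.199)–(5.202); the step Δ is an illustrative value.

```python
def ldm_encode(s, delta=0.1, c1=1.0):
    """Linear delta modulator encoder, (5.199)-(5.201); c1 < 1 would
    give a leaky integrator."""
    s_hat, bits = 0.0, []
    for sk in s:
        if sk - s_hat >= 0:                 # f(k) >= 0
            fq, ck = delta, 1
        else:
            fq, ck = -delta, 0
        bits.append(ck)
        s_hat = c1 * (s_hat + fq)           # s^(k+1) = c1[s^(k) + f_q(k)]
    return bits

def ldm_decode(bits, delta=0.1, c1=1.0):
    """Decoder: f_q(k) from c(k) as in (5.202), then
    s_q(k) = c1 s_q(k-1) + f_q(k)."""
    sq, out = 0.0, []
    for ck in bits:
        fq = delta if ck == 1 else -delta
        sq = c1 * sq + fq
        out.append(sq)
    return out
```

For an input whose slope per sample stays below Δ, the decoded signal tracks the input to within roughly one step (granular noise); a steeper input produces slope overload.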
LDM implementation
Digital implementation. An implementation of the scheme of Figure 5.69 for c_1 = 1 is given in Figure 5.70a. Letting b(k) = sgn[f(k)], this implementation involves the accumulation of {b(i)}, i ≤ k, by an up-down counter (ACC). The accumulated value is proportional to s_q(k).
where

p = { p_o > 1  if c(k) = c(k-1)  (slope overload)
    { p_g < 1  if c(k) ≠ c(k-1)  (granular noise)     (5.208)
We note that in this scheme Δ(k) also depends on c(k). The following relations between the signals of Figure 5.72 hold.
Encoder:

f(k) = s(k) - ŝ(k)     (5.209)
b(k) = sgn f(k)     (5.210)
Δ(k) = p Δ(k-1)     (5.211)
f_q(k) = Δ(k) b(k)     (5.212)
ŝ(k+1) = c_1 [ŝ(k) + f_q(k)]     (5.213)

Decoder:

Δ(k) = p Δ(k-1)     (5.214)
f_q(k) = Δ(k) b(k)     (5.215)
s_q(k) = c_1 s_q(k-1) + f_q(k)     (5.216)
s_{q,o}(k) = s_q * g(k)     (5.217)
Typical values for p_o and p_g are given by 1.25 < p_o < 2 and p_o p_g ≤ 1. A graphic representation of ADM encoding is shown in Figure 5.73.
Experiments on speech signals show that by doubling the sampling rate in the ADM we get an improvement of 10 dB in Λ_q. In some applications ADM encoding is preferred to PCM because of its simple implementation, in spite of the higher bit rate.
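The adaptive-step logic of (5.209)–(5.213) can be sketched as follows; the values of p_o, p_g and the initial step are illustrative choices satisfying the typical ranges quoted above.

```python
def adm_encode(s, po=1.5, pg=0.66, delta0=0.05, c1=1.0):
    """ADM encoder sketch, (5.209)-(5.213): the step Delta(k) grows by
    po > 1 when consecutive bits agree (slope overload) and shrinks by
    pg < 1 otherwise (granular noise); here po * pg ~= 1."""
    s_hat, delta, prev_b, bits = 0.0, delta0, 1, []
    for sk in s:
        b = 1 if sk - s_hat >= 0 else -1    # b(k) = sgn f(k)
        delta *= po if b == prev_b else pg  # Delta(k) = p Delta(k-1)
        fq = delta * b                      # f_q(k) = Delta(k) b(k)
        s_hat = c1 * (s_hat + fq)           # (5.213)
        bits.append(b)
        prev_b = b
    return bits
```

The decoder repeats the same step recursion (5.214)–(5.216) driven by the received bits.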
where 0 < α ≤ 1, and D_1 and D_2 are suitable positive parameters with D_2 ≫ D_1. The value of α controls the speed of adaptation. The main difficulty of this scheme is the sensitivity of {s_q(k)} to transmission errors, especially for α = 1; choosing α < 1 mitigates the effects of transmission errors, at the expense of worse performance.
If p_1 and p_2 are real with 0 < p_1, p_2 ≤ 1, the decoder is equivalent to the cascade of two leaky integrators. The problem now is to determine the slope overload condition from the sequence {c(k)}.
the source {s(k)}, for example speech, is modeled by an AR(N) linear system

H(z) = \frac{\sigma}{1 - \sum_{i=1}^{N} c_i z^{-i}}     (5.223)
Regular pulse excited (RPE). The excitation signal consists of a train of undersampled impulses, derived from the residual signal.
Codebook excited linear prediction (CELP). The excitation signal is selected from a collection of possible waveforms stored in a table.
We will now analyze in detail some coding schemes. For further study we refer the
reader to [3, 6].
Vocoder or LPC
The general scheme for conventional LPC, known also as the LPC vocoder, is illustrated in Figure 5.77. At the encoder, the signal is classified as voiced or unvoiced, and the LPC parameters are extracted, together with the pitch period P for the voiced case; however, the prediction residual error is not transmitted. At the decoder, for the voiced case a train of impulses with period P is produced, whereas for the unvoiced case white noise is produced. The excitation is then filtered by the AR filter to generate the reconstruction signal.
In an early LPC scheme for military radio applications (LPC-10), the input signal sampled
at 8 kHz is segmented into blocks of 180 samples. For the analysis of the LPC parameters
the covariance method is used; overall 54 bits per block are needed with a bit rate of
2400 bit/s.
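The decoder operation just described (impulse train or white noise driving the all-pole AR filter) can be sketched as follows; the gain handling and all parameter values are assumptions of this sketch.

```python
import numpy as np

def vocoder_synthesize(voiced, P, a, gain, n, seed=0):
    """LPC vocoder decoder sketch: for a voiced frame the excitation is an
    impulse train of period P, for an unvoiced frame white noise; the
    excitation drives the AR filter with coefficients a = [a_1, ..., a_N],
    cf. (5.168)."""
    if voiced:
        w = np.zeros(n)
        w[::P] = 1.0                        # periodic impulse excitation
    else:
        w = np.random.default_rng(seed).standard_normal(n)
    state = np.zeros(len(a))                # [s(k-1), ..., s(k-N)]
    out = np.empty(n)
    for k in range(n):
        sk = a @ state + gain * w[k]        # s(k) = sum_i a_i s(k-i) + g w(k)
        state = np.roll(state, 1)
        state[0] = sk
        out[k] = sk
    return out
```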
RPE coding
The RPE coding scheme, illustrated in Figure 5.78a, is a particular case of residual excited
LP (RELP) coding in which the excitation is obtained by downsampling the prediction
residual error by a factor of 3, as shown in Figure 5.78b; the excitation sequence is then
quantized using a 3-bit adaptive non-uniform quantizer. The choice of the best of the
three subsequences (actually four are used in practice) is made by the analysis-by-synthesis
(ABS) approach, where all the excitations are tried: the best is that which produces the
output “closest” to the original signal.
The ETSI standard for GSM (06.10) also includes a long-term predictor like the one of Figure 5.64. The bit rate is 13 kbit/s, with a latency lower than 80 ms, operating with blocks of 160 samples.
CELP coding
As shown in Figure 5.79, the excitations belong to a codebook obtained in a “random”
way, or by vector quantization (see Section 5.8) of the residual signal. The choice of
the excitation (index of the codebook) is made by the ABS approach, trying to minimize
the output of the weighting filter; also in this case the predictor includes a long-term component.
Multipulse coding
It is similar to CELP coding, with the difference that the minimization procedure is used
to determine the position and amplitude of a specific number of impulses. The analysis
procedure is less complex than that of the CELP scheme.
reproduction vector of A and decides for the vector Q_i of the codebook A that minimizes it; the decoder associates the vector Q_i to the index i received. We note that the information transmitted over the digital channel identifies the code vector Q_i: therefore it depends only on the codebook size L and not on N, the dimension of the code vectors.
An example of input vector s is obtained by considering N samples at a time of a speech signal, s(k) = [s(kN T_c), ..., s((kN - N + 1) T_c)]^T, or the N LPC coefficients, s(k) = [c_1(k), ..., c_N(k)]^T, associated with an observation window of a signal.
5.8.1 Characterization of VQ
Considering the general case of complex-valued signals, a vector quantizer is characterized by:

• Source or input vector s = [s_1, s_2, ..., s_N]^T ∈ C^N.
• Codebook A = {Q_i}, i = 1, ..., L, where Q_i ∈ C^N is a code vector.
• Distortion measure d(s, Q_i).
• Quantization rule (minimum distortion)

Q: C^N → A  with  Q_i = Q[s]  if  i = \arg\min_{\ell} d(s, Q_\ell)     (5.225)
Figure 5.81. Partition of the source space C^2 into four subsets or Voronoi regions.
R_I = \frac{R_q}{N} = \frac{\log_2 L}{N} \quad \text{(bit/sample)}     (5.230)

• Rate in bit/s

R_b = \frac{R_I}{T_c} = \frac{\log_2 L}{N T_c} \quad \text{(bit/s)}     (5.231)

where T_c denotes the time interval between two consecutive samples of a vector. In other words, in (5.231) N T_c is the sampling period of the vector sequence {s(k)}.

• Distortion

d(s, Q_i)     (5.232)

d: C^N \times A \to \Re^+     (5.233)
If the input process s(k) is stationary and the probability density function p_s(a) is known, we can compute the mean distortion as
where

\bar{d}_\ell = \frac{\int_{R_\ell} d(a, Q_\ell)\, p_s(a)\, da}{\int_{R_\ell} p_s(a)\, da}     (5.237)

D(R, A) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} d(s(k), Q[s(k)])     (5.238)

In practice we always assume that the process {s} is stationary and ergodic, and we use the average distortion (5.238) as an estimate of the expectation (5.234).
Defining

Q_i = [Q_{i,1}, Q_{i,2}, ..., Q_{i,N}]^T     (5.239)

we give below two measures of distortion of particular interest.

1. Distortion as the ℓ_ν norm to the μ-th power:

d(s, Q_i) = \|s - Q_i\|_\nu^\mu = \left[ \sum_{n=1}^{N} |s_n - Q_{i,n}|^\nu \right]^{\mu/\nu}     (5.240)

2. Itakura–Saito distortion:

d(s, Q_i) = (s - Q_i)^H R_s (s - Q_i) = \sum_{n=1}^{N} \sum_{m=1}^{N} (s_n - Q_{i,n})^* [R_s]_{n,m} (s_m - Q_{i,m})     (5.242)

where R_s is the autocorrelation matrix of the vector s*(k), defined in (1.346), with elements [R_s]_{n,m}, n, m = 1, 2, ..., N.

11 Although the same symbol is used, the metric defined by (5.241) is the square of the Euclidean distance (1.38).
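Both distortion measures are simple to evaluate numerically; in the sketch below the matrix `R` is assumed Hermitian non-negative definite, so the quadratic form is real.

```python
import numpy as np

def lnu_distortion(s, Qi, nu=2, mu=2):
    """l-nu norm to the mu-th power, (5.240); nu = mu = 2 gives the
    squared Euclidean distance."""
    return float(np.sum(np.abs(s - Qi) ** nu) ** (mu / nu))

def quadratic_distortion(s, Qi, R):
    """Weighted quadratic form (s - Q_i)^H R (s - Q_i) as in (5.242)."""
    e = s - Qi
    return float(np.real(np.conj(e) @ R @ e))
```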
where

• F(N) is the space-filling gain. In the scalar case the partition regions must necessarily be intervals. In an N-dimensional space, R_i can be "shaped" very closely to a sphere. The asymptotic value for N → ∞ equals F(∞) = 2πe/12 = 1.4233 = 1.53 dB.

• S(N) is the gain related to the shape of p_s(a), defined as12

S(N) = \frac{\|\tilde{p}_s(a)\|_{1/3}}{\|\tilde{p}_s(a)\|_{N/(N+2)}}     (5.245)

where p̃_s(a) is the probability density function of the input s considered with uncorrelated components. S(N) does not depend on the variance of the random variables of s, but only on the norm order N/(N+2) and the shape of p̃_s(a). For N → ∞, we obtain

S(\infty) = \frac{\|\tilde{p}_s(a)\|_{1/3}}{\|\tilde{p}_s(a)\|_{1}}     (5.246)

12 The norm of order μ/ν is defined as \|\tilde{p}_s(a)\|_{\mu/\nu} = \left[ \int \cdots \int \tilde{p}_s^{\mu/\nu}(a_1, ..., a_N)\, da_1 ... da_N \right]^{\nu/\mu}     (5.244)
Rule B (Optimum codebook). Assuming the partition R is given, we want to find the optimum codebook A. By minimizing (5.236) we obtain the solution

2. Using rule A we determine the optimum partition R[A_j] using the codebook A_j.
3. We evaluate the distortion associated with the choice of A_j and R[A_j] using (5.236).
4. If

\frac{D_{j-1} - D_j}{D_j} < \epsilon     (5.252)

5. Using rule B we evaluate the optimum codebook associated with the partition R[A_{j-1}].
6. We go back to step 2.
The solution found is at least locally optimum; nevertheless, given that the number of
locally optimum codes can be rather large, and some of the locally optimum codes may
give rather poor performance, it is often advantageous to provide a good codebook to the
algorithm to start with, as well as trying different initial codebooks.
The algorithm is clearly a generalization of the Lloyd algorithm given in Section 5.3.2:
the only difference is that the vector version begins with a codebook (alphabet) rather than
with an initial partition of the input space. However, the implementation of this algorithm
is difficult for the following reasons.
• The algorithm assumes that p_s(a) is known. In scalar quantization it is possible, in many applications, to develop an appropriate model of p_s(a), but this becomes a more difficult problem as the number of dimensions N increases: in fact, the identification of the distribution type, for example Gaussian or Laplacian, is no longer sufficient, as we also need to characterize the statistical dependence among the elements of the source vector.
• The computation of the input space partition is much harder for the VQ. In fact, whereas in scalar quantization the partition of the real axis is completely specified by a set of (L - 1) points, in the two-dimensional case the partition is specified by a set of straight lines, and in the multi-dimensional case finding the optimum solution becomes very hard. For VQ with a large number of dimensions, the partition also becomes harder to describe geometrically.
• Also in the particular case (5.251), the calculation of the centroid is difficult for the VQ, because it requires evaluating a multiple integral over the region R_i.
{s(m)},  m = 1, ..., K     (5.253)

D = \frac{1}{K} \sum_{k=1}^{K} d(s(k), Q[s(k)])     (5.254)

Rule A

Rule B

Q_i = \arg\min_{Q_j \in C^N} \frac{1}{m_i} \sum_{s(k) \in R_i} d(s(k), Q_j)     (5.256)

d(s(k), Q_i) = \sum_{n=1}^{N} |s_n(k) - Q_{i,n}|^2     (5.257)

Rule B

Q_i = \frac{1}{m_i} \sum_{s(k) \in R_i} s(k)     (5.258)

that is, Q_i coincides with the arithmetic mean of the TS vectors that are inside R_i.
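Rules A and B together give the generalized Lloyd (LBG) iteration for the squared-error distortion; in the sketch below, the random-subset initialization and the handling of empty cells are common but arbitrary choices, not prescribed by the text.

```python
import numpy as np

def lbg(ts, L, eps=1e-3, iters=100):
    """LBG iteration on a training sequence `ts` (K x N array): rule A
    assigns each vector to the nearest code vector, rule B replaces each
    code vector by the arithmetic mean of its cell (5.258); the loop stops
    on the relative distortion decrease, cf. (5.252)."""
    rng = np.random.default_rng(0)
    codebook = ts[rng.choice(len(ts), L, replace=False)].astype(float)
    d_prev = np.inf
    for _ in range(iters):
        # Rule A: nearest-neighbor partition of the TS
        d2 = ((ts[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        d_cur = d2[np.arange(len(ts)), idx].mean()   # distortion (5.254)
        if (d_prev - d_cur) / max(d_cur, 1e-12) < eps:
            break
        d_prev = d_cur
        # Rule B: centroids (an empty cell keeps its old code vector)
        for i in range(L):
            if np.any(idx == i):
                codebook[i] = ts[idx == i].mean(0)
    return codebook
```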
where

\epsilon^- = 0     (5.262)

and

\epsilon^+ = \frac{1}{10} \sqrt{\frac{M_s}{N}} \cdot \mathbf{1}     (5.263)

so that
the largest contribution to the distortion; then we compute the new partition and proceed
in the usual way.
We give some practical rules, taken from [3] for LPC applications,13 that can be useful in the design of a vector quantizer.

• If K/L ≤ 30, there is a possibility of empty regions, where we recall K is the number of vectors of the TS, and L is the number of code vectors.
• If K/L ≤ 600, an appreciable difference may exist between the distortion calculated with the TS and that calculated with a new sequence.
In the latter situation, it may in fact happen that, for a very short TS, the distortion computed for vectors of the TS is very small; the extreme case is obtained by setting K = L, hence D = 0. In this situation, for a sequence different from the TS (outside TS), the distortion is

13 These rules were derived in the VQ of LPC vectors. They can be considered valid in the case of strongly correlated vectors.
Figure 5.86. Values of the distortion as a function of the number of vectors K in the inside
and outside training sequences.
in general very high. As illustrated in Figure 5.86, only if K is large enough does the TS adequately represent the input process, and no substantial difference appears between the distortion measured with vectors inside or outside the TS [10].14
Finally, we find that the LBG algorithm, even though very simple, requires numerous computations. Consider, for example, as vector source the LPC coefficients with N = 10, computed over windows of duration equal to 20 ms of a speech signal sampled at 8 kHz. Taking L = 256 we have a rate R_b = 8 bit/20 ms, equal to 400 bit/s. As a matter of fact, the
14 This situation is similar to that obtained by the LS method (see Section 3.2).
LBG algorithm requires a minimum of K = 600 · 256 ≈ 155000 vectors for the TS, which roughly corresponds to three minutes of speech.
5.8.4 Variants of VQ
Tree search VQ
A random VQ, determined according to the LBG algorithm, requires:

• a large memory to store the codebook;
• a large computational complexity to evaluate the L distances for encoding.

A variant of VQ that requires a lower computational complexity, at the expense of a larger memory, is the tree search VQ. As illustrated in Figure 5.87, whereas in the memoryless VQ case the input vector s must be compared with all the elements of the codebook, thus determining a full search, in the tree search VQ we proceed by levels: first we compare s with Q_{A1} and Q_{A2}, then we proceed along the branch whose node has a representative vector "closest" to s.
To determine the code vectors at different nodes, for a binary tree the procedure consists
of the following steps.
1. Calculate the optimum quantizer for the first level by the LBG algorithm; the codebook contains 2 code vectors.
2. Divide the training sequence into subsequences relative to every node of level n (n = 2, 3, ..., N_LEV, with N_LEV = log_2 L); in other words, collect all vectors that are associated with the same code vector.
3. Apply the LBG algorithm to every subsequence to calculate the codebook of level n.
              Computations of d(·,·)    Memory
full search   2^{R_q}                   2^{R_q}
tree search   2 R_q                     \sum_{i=1}^{R_q} 2^i \simeq 2^{R_q+1}
As an example, the memory requirements and the number of computations of d(·,·) are shown in Table 5.15 for a given value of R_q (bit/vector) in the cases of full search and tree search. Although the performance is slightly lower, the computational complexity of the encoding scheme for a tree search is considerably reduced.
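For a binary tree, the encoding step can be sketched as follows; the flat list-of-levels layout (level n holding 2^n node vectors, children of node i at positions 2i and 2i+1) is an assumption of this sketch.

```python
import numpy as np

def tree_search_encode(s, tree):
    """Binary tree-search VQ encoding: at each level only the two children
    of the current node are compared with s, so encoding costs 2*Rq
    distance computations instead of the 2^Rq of a full search."""
    i = 0
    for level in tree:
        left, right = level[2 * i], level[2 * i + 1]
        d_l = np.sum(np.abs(s - left) ** 2)
        d_r = np.sum(np.abs(s - right) ** 2)
        i = 2 * i + (1 if d_r < d_l else 0)
    return i          # index of the selected leaf code vector
```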
Multistage VQ
The multistage VQ technique presents the advantage of reducing both the encoder computational complexity and the memory required. The idea consists in dividing the encoding procedure into successive stages, where the first stage performs quantization with a codebook with a reduced number of elements. Successively, the second stage performs quantization of the error vector e = s - Q[s]: the quantized error gives a more accurate representation of the input vector. A third stage could be used to quantize the error of the second stage and so on.
We compare the complexity of a one-stage scheme with that of a two-stage scheme, illustrated in Figure 5.88. Let R_q = log_2 L be the rate in bit/vector for both systems and assume that all the code vectors have the same dimension N = N_1 = N_2.

• Two-stage:
  R_q = log_2 L_1 + log_2 L_2, hence L_1 L_2 = L.
  Computations of d(·,·) for encoding: L_1 + L_2.
  Memory: L_1 + L_2 locations.

• One-stage:
  R_q = log_2 L.
  Computations of d(·,·) for encoding: L.
  Memory: L locations.
The advantage of a multistage approach in terms of cost of implementation is evident; however, it has lower performance than a one-stage VQ.
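A two-stage encoder can be sketched as:

```python
import numpy as np

def two_stage_encode(s, cb1, cb2):
    """Two-stage VQ sketch: the first codebook quantizes s coarsely, the
    second quantizes the error e = s - Q[s]; encoding costs L1 + L2
    distance computations (and L1 + L2 stored vectors) instead of L1*L2."""
    i1 = int(np.argmin(np.sum((cb1 - s) ** 2, axis=1)))
    e = s - cb1[i1]
    i2 = int(np.argmin(np.sum((cb2 - e) ** 2, axis=1)))
    return i1, i2, cb1[i1] + cb2[i2]   # indices sent + reproduction vector
```

The pair (i1, i2) is what is transmitted; the decoder simply adds the two code vectors.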
Product code VQ
The input vector is split into subvectors that are quantized independently, as illustrated in
Figure 5.89.
This technique is useful if a) there are input vector components that can be encoded separately because of their different effects, e.g., prediction gain and LPC coefficients, or b) the input vector has too large a dimension to be encoded directly.
It presents the disadvantage that it does not consider the correlation that may exist between the various subvectors, which could bring about a greater coding efficiency.
A more general approach is the sequential search product code [7], where the quantization of subvector n depends also on the quantization of the previous subvectors.
With reference to Figure 5.89, assuming L = L_1 L_2 and N = N_1 + N_2, we note that the rate per dimension for the VQ is given by

R_q = \frac{\log_2 L}{N} = \frac{\log_2 L_1}{N} + \frac{\log_2 L_2}{N}     (5.265)

whereas for the product code VQ it is given by

R_q = \frac{\log_2 L_1}{N_1} + \frac{\log_2 L_2}{N_2}     (5.266)
Standard (description):

1. G.711: PCM at 64 kbit/s
2. G.721: ADPCM at 32 kbit/s
3. G.723: ADPCM at 24 and 40 kbit/s
4. G.726: G.723 + G.721
5. G.727: embedded ADPCM at 40, 32, 24 and 16 kbit/s ("embedded" means that a code also includes those of lower rate)
6. G.728: LD-CELP at 16 kbit/s (LD stands for low delay)
7. G.729: CS-ACELP at 8 kbit/s
8. G.729 Annex A: CS-ACELP at 8 kbit/s with reduced complexity
9. G.723.1: MPC-MLQ at 5.3 and 6.4 kbit/s
10. G.722: SBC + ADPCM for wideband speech at 64, 56 and 48 kbit/s. An SBC scheme having two bands, 0 ÷ 4 kHz and 4 ÷ 8 kHz, is used; in each band there is a G.721 encoder. Bit allocation in the two bands is dynamic, for example 5+3 or 6+2
11. IS-54 (TIA): VSELP at 7.95 kbit/s (VSELP stands for vector sum excited linear prediction)
12. FS-1015 (LPC-10E): LPC at 2.4 kbit/s
13. FS-1016: CELP at 4.8 kbit/s
14. GSM-FR: RPE-LTP at 13 kbit/s (LTP stands for long-term prediction); there is also a 5.6 kbit/s version
15. MPEG1, Layer I: SBC at 192 kbit/s per audio channel (stereo) [generally 32 ÷ 448 kbit/s total]
16. MPEG1, Layer II: SBC at 128 kbit/s per audio channel [generally 32 ÷ 384 kbit/s total]
17. MPEG1, Layer III: SBC + MDCT + Huffman coding at 96 kbit/s per audio channel [generally 32 ÷ 320 kbit/s total]
18. MPEG2, AAC: SBC + MDCT coding at 64 kbit/s per audio channel
Bibliography
[3] D. Sereno and P. Valocchi, Codifica numerica del segnale audio. L’Aquila: Scuola
Superiore G. Reiss Romoli, 1996.
[4] N. S. Jayant and P. Noll, Digital coding of waveforms. Englewood Cliffs, NJ: Prentice-
Hall, 1984.
[5] B. S. Atal and J. R. Remde, “A new model of LPC excitation for producing natural-
sounding speech at low bit rates”, in Proc. ICASSP, pp. 614–617, 1982.
[6] B. S. Atal, V. Cuperman, and A. Gersho, eds, Advances in speech coding. Boston,
MA: Kluwer Academic Publishers, 1991.
[7] A. Gersho and R. M. Gray, Vector quantization and signal compression. Boston, MA:
Kluwer Academic Publishers, 1992.
[8] R. M. Gray, “Vector quantization”, IEEE ASSP Magazine, vol. 1, pp. 4–29, Apr. 1984.
[9] T. D. Lookabaugh and R. M. Gray, “High–resolution quantization theory and the
vector quantizer advantage”, IEEE Trans. on Information Theory, vol. 35, pp. 1020–
1033, Sept. 1989.
[10] Y. Linde, A. Buzo, and R. M. Gray, “An algorithm for vector quantizer design”, IEEE
Trans. on Communications, vol. 28, pp. 84–95, Jan. 1980.
[11] IEEE Communication Magazine, Sept. 1997. vol. 35.
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 6
Modulation theory
The term modulation indicates the process of translating the information generated by a
source into a signal that is suitable for transmission over a physical channel. In the case of
digital transmission, the information is represented by a sequence of binary data (bits) gen-
erated by the source, or by a digital encoder of analog signals (see Chapter 5). The mapping
of a digital sequence to a signal is called digital modulation, and the device that performs the mapping is called a digital modulator. A modulator may employ a set of M = 2 waveforms to generate a signal (binary modulation), or, in general, M ≥ 2 waveforms (M-ary modulation). The transmission medium determines the channel characteristics, as discussed
in Chapter 4; it may consist for example of a twisted-pair cable, a coaxial cable, an optical
fiber, an infrared link, a radio link, or a combination of them; in any case the channel
modifies the transmitted waveform by introducing for example distortion, interference, and
noise. In this chapter we assume that the channel introduces only additive white Gaus-
sian noise (AWGN). We postpone the study of other effects to the next chapters; only in
Section 6.12 we will give some simple results for a channel affected by flat fading.
The task of the receiver is to detect which signal was transmitted, based on the received
signal. Using the vector representation of signals discussed in Section 1.2, in this chapter we
will introduce the optimum receiver, referring to the detection theory, and present a survey of
the main modulation techniques, e.g., PAM, PSK, QAM, orthogonal, and biorthogonal. The
performance of each modulation-demodulation method is evaluated with reference to the
bit error probability, and a comparison of the various methods is given in terms of spectral
efficiency and required transmission bandwidth. The transmission rates achievable by the
various modulation methods over a specific channel for a given target error probability are
then compared with the Shannon bound, which indicates the maximum rate that can be
achieved for reliable transmission.
with values in {1, ..., M}, and represents the index, or symbol, of the transmitted signal. Assuming that the waveform with index m is transmitted, that is a_0 = m, the received, or observed, signal is given by
The receiver, based on r(t), must decide which among the M hypotheses is the most probable, and correspondingly must select the detected value â_0 [1]. The theory exposed in this section can be immediately extended to the case of complex-valued signals.
We represent time-continuous signals using the vector notation introduced in Section 1.2. Let {φ_i(t)}, i = 1, ..., I, be a complete basis for the M signals {s_m(t)}, m = 1, ..., M; let s_m be the vector representation of s_m(t), t ∈ ℜ, where

s_{m,i} = \langle s_m, \phi_i \rangle = \int_{-\infty}^{+\infty} s_m(t)\, \phi_i^*(t)\, dt, \quad m = 1, ..., M, \; i = 1, ..., I     (6.4)

Recall that the set {s_m}, m = 1, ..., M, is also called the system constellation.
The basis of I functions may be incomplete for the representation of the noise signal w. In any case, we express the noise as

w(t) = w_\varphi(t) + w_\perp(t)     (6.5)

where

w_\varphi(t) = \sum_{i=1}^{I} w_i \phi_i(t)     (6.6)

and where

w_i = \langle w, \phi_i \rangle = \int_{-\infty}^{+\infty} w(t)\, \phi_i^*(t)\, dt     (6.7)

In other words, w_φ is the component of w that lies in the space spanned by the basis {φ_i(t)}, t ∈ ℜ, i = 1, ..., I, whereas w_⊥ is the error due to this representation. Since

w_\perp(t) = w(t) - w_\varphi(t)     (6.8)

is orthogonal to {φ_i(t)}, i = 1, ..., I, and hence also to w_φ, we can say that w_⊥ lies outside of the desired signal space and, as we will state later using the theorem of irrelevance, it can be ignored because it is irrelevant for the detection. The vector representation of the component of the noise signal that lies in the span of {φ_i(t)} is given by

w = [w_1, ..., w_I]^T     (6.9)
    E[w_i w_j*] = ∫∫ (N_0/2) δ(t_1 − t_2) φ_i*(t_1) φ_j(t_2) dt_1 dt_2
               = (N_0/2) ∫ φ_i*(t) φ_j(t) dt   (6.12)
               = (N_0/2) δ_{ij},   i, j = 1, …, I

because of the orthonormality of the basis {φ_i(t)}.
Hence the components {w_i} are uncorrelated. As {w_i}, i = 1, …, I, are jointly Gaussian
uncorrelated random variables with zero mean, they are statistically independent with
equal variance given by

    σ_I² = N_0/2   (6.13)
440 Chapter 6. Modulation theory
Sufficient statistics
Defining

    r = [r_1, …, r_I]^T   with   r_i = ⟨r, φ_i⟩   (6.14)

the components of the vector r are called sufficient statistics¹ to decide among the M
hypotheses. Therefore we get the formulation equivalent to (6.2),

    H_n :  r = s_n + w,   n = 1, …, M   (6.15)

From the above results, the probability density function of r, under the hypothesis that
waveform n is transmitted, is given by:²

    p_{r|a_0}(ρ | n) = (1/√(π N_0))^I exp(−‖ρ − s_n‖²/N_0),   ρ ∈ ℝ^I   (6.16)
Decision criterion
We subdivide the space ℝ^I of the received signal r into M non-overlapping regions R_n
(∪_{n=1}^{M} R_n = ℝ^I and R_n ∩ R_m = ∅ for n ≠ m). Then we adopt the following decision rule:

    choose H_n (and â_0 = n)  if  r ∈ R_n   (6.17)

The choice of the M regions is made so that the probability of a correct decision is maximum.
Let p_n = P[a_0 = n] be the transmission probability of the waveform n, or a priori
probability. Recalling the total probability theorem, the probability of correct decision is
given by

    P[C] = P[â_0 = a_0]
         = Σ_{n=1}^{M} P[â_0 = n | a_0 = n] P[a_0 = n]
         = Σ_{n=1}^{M} p_n P[r ∈ R_n | a_0 = n]   (6.18)
         = Σ_{n=1}^{M} p_n ∫_{R_n} p_{r|a_0}(ρ | n) dρ
1 Given a desired signal corrupted by noise, in general the notion of sufficient statistics applies to any signal, or
sequence of samples, that allows the optimum detection of the desired signal. In other words, no information
is lost in considering a set of sufficient statistics instead of the received signal.
A particular case is represented by transformations that allow reconstruction of a signal using the basis
identified by the desired signal. For example, considering the basis {e^{±j2πft}, t ∈ ℝ, f ∈ B} to represent a
real-valued signal with passband B in the presence of additive noise, that is, the Fourier transform of the noisy
signal filtered by an ideal filter with passband B, we are able to reconstruct the noisy signal within the passband
of the desired signal; therefore the noisy signal filtered by a filter with passband B is a sufficient statistic.
² Here we use the formulation (1.377) for real-valued signals; we would get the same results using the formulation
(1.380) for complex-valued signals.
The integrand consists of M terms but, as the M regions are non-overlapping, for
each value of ρ only one of the terms is different from zero. Therefore the maximum value
of the integrand for each value of ρ, and hence of the integral, is achieved if for
each value of ρ we select among the M terms the one that yields the maximum value of
p_n p_{r|a_0}(ρ | n). Thus we have the following decision criterion.³
By Bayes' rule,

    p_{r|a_0}(ρ | n) = (P[a_0 = n | r = ρ] / p_n) p_r(ρ)   (6.23)

In other words, given that we observe ρ, the signal detected by (6.24) has the largest
probability of having been transmitted. The probabilities P[a_0 = n | r = ρ], n = 1, …, M,
are the a posteriori probabilities.
We give a simple example of application of the MAP criterion for I = 1 and M = 3.
Let the functions p_n p_{r|a_0}(ρ | n), n = 1, 2, 3, be given as shown in Figure 6.2. If we
denote by τ_1, τ_2, and τ_3 the intersection points of the various functions as illustrated in
Figure 6.2, it is easy to verify that

[Figure 6.2: the three functions p_n p_{r|a_0}(ρ | n), n = 1, 2, 3, plotted versus ρ, with intersection points τ_1, τ_2, τ_3.]
Maximum likelihood (ML) criterion. If the signals are equally likely a priori, i.e. p_n =
1/M, ∀n, the criterion (6.22) becomes
The ML criterion leads to choosing the value of n for which the conditional probability
that r = ρ is observed, given a_0 = n, is maximum.
In some texts the ML criterion is formulated via the definition of the likelihood ratios:

    L_n(ρ) = p_{r|a_0}(ρ | n) / p_{r|a_0}(ρ | 1),   n = 1, 2, …, M   (6.27)

In this case the ML criterion becomes
Hence the ML criterion coincides with the minimum-distance criterion: "decide for the
signal vector s_m that is closest to the received signal vector ρ". Moreover, the decision
regions {R_n}, n = 1, …, M, are easily determined.
An example is given in Figure 6.3 for the three signals of Example 1.2.2 on page 10.
Considering a pair of vectors s_i, s_j, we draw the straight line of points that are equidistant
from s_i and s_j: this straight line defines the boundary between R_i and R_j. The procedure
is then repeated for every pair of vectors. The decision region associated with each vector
s_n is given by the intersection of two half-planes, as illustrated in Figure 6.3.
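The minimum-distance rule can be sketched in a few lines of Python (an illustrative sketch; the signal vectors below are arbitrary, not those of Example 1.2.2):

```python
def ml_detect(r, signals):
    """Minimum-distance (ML) detection: return the index n of the
    signal vector s_n closest to the received vector r."""
    dists = [sum((ri - si) ** 2 for ri, si in zip(r, s)) for s in signals]
    return dists.index(min(dists)) + 1  # signals are indexed 1..M

# Three illustrative signal vectors in a two-dimensional signal space:
S = [(1.0, 0.0), (0.0, 1.0), (-1.0, 1.0)]
print(ml_detect((0.2, 0.9), S))  # a vector near s_2 is detected as n = 2
```

Computing all M squared distances and taking the minimum is exactly the comparison that the decision regions R_n implement geometrically.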
Theorem of irrelevance
With regard to the decision process, we introduce a theorem that formalizes the distinction
previously mentioned between relevant and irrelevant components of the received signal.
Figure 6.3. Construction of the decision regions R_1, R_2, R_3 for the three-signal constellation of Example 1.2.2 in the (φ_1, φ_2) plane.
Let us assume that the signal vector r can be split into two parts, r = [r_1, r_2]. Then,
under the hypothesis a_0 = n,

    p_{r|a_0}(ρ | n) = p_{r_1,r_2|a_0}(ρ_1, ρ_2 | n)   (6.31)

which, recalling the definition of conditional probability, can be rewritten as

    p_{r_1,r_2|a_0}(ρ_1, ρ_2 | n) = p_{r_2|r_1,a_0}(ρ_2 | ρ_1, n) p_{r_1|a_0}(ρ_1 | n)   (6.32)

Substitution of (6.32) into (6.22) leads to the following result.

Theorem 6.1
If p_{r_2|r_1,a_0}(ρ_2 | ρ_1, n) does not depend on the particular value n assumed by a_0, that is, if

    p_{r_2|r_1,a_0}(ρ_2 | ρ_1, n) = p_{r_2|r_1}(ρ_2 | ρ_1)   (6.33)

then the optimum receiver can disregard the component r_2 and base its decision only on the
component r_1.

Corollary 6.1
A sufficient condition to disregard r_2 is that

    p_{r_2|r_1,a_0}(ρ_2 | ρ_1, n) = p_{r_2}(ρ_2)   (6.34)
Example 6.1.1
The system (6.2) is represented using a larger basis, as illustrated in Figure 6.4, where the
noise (6.5) has two components

    w_1 = w_∥,   w_2 = w_⊥   (6.35)

[Figure 6.4: vector channel with outputs r_1 = w_1 + s_n and r_2 = w_2, for s_n ∈ {s_1, …, s_M}.]

We note that the received signal vector r_2 coincides with the noise vector w_2, which is
statistically independent of w_1 and s_n. Therefore we have
Example 6.1.2
In the system shown in Figure 6.5, the noise vectors w_1 and w_2 are statistically independent.
As r_2 = r_1 + w_2, if r_1 is known, then r_2 depends only on the noise w_2, which is independent
of the particular s_n transmitted. Then

Example 6.1.3
As in the previous example, the noise vectors w_1 and w_2 in Figure 6.6 are statistically
independent. Under this condition, however, r_2 cannot be disregarded by the optimum
receiver: in fact, from r_2 = w_2 + w_1 and w_1 = r_1 − s_n, we get

[Figure 6.5: r_1 = w_1 + s_n and r_2 = w_2 + r_1 = w_2 + w_1 + s_n. Figure 6.6: r_1 = w_1 + s_n and r_2 = w_2 + w_1.]

Example 6.1.4
In Figure 6.7, if the noise vectors w_1 and w_2 are statistically independent, then, observing (6.34),
the signal r_2 can be neglected by the optimum receiver. In fact, it is

    p_{r_2|r_1,a_0}(ρ_2 | ρ_1, n) = p_{w_2|w_1,a_0}(ρ_2 | ρ_1 − s_n, n) = p_{w_2}(ρ_2)   (6.39)

which does not depend on n.
We note that Example 6.1.1 is a particular case of Example 6.1.4.
Implementation type 1. As illustrated in Figure 6.8, there are two fundamental blocks: the
first determines the I components of the vector r, and the second computes the M distances

    D_n = ‖r − s_n‖²,   n = 1, …, M   (6.40)

We note that the filter on branch i has impulse response rect((t − t_0/2)/t_0), and
yields the output

    y_i(t) = ∫_{t−t_0}^{t} r(τ) φ_i*(τ) dτ   (6.41)
Figure 6.9. (a) Correlation demodulator, where r(t) φ_i*(t) is integrated over a window of duration t_0, and (b) equivalent matched filter (MF) demodulator with impulse response φ_i*(t_0 − t); both are sampled at t = t_0 to yield r_i.
The implementation of (6.44) is illustrated in Figure 6.10, whereas the equivalent criterion
(6.43) is also given in Figure 6.8.
[Figure 6.10: ML receiver, implementation type 2: on branch n the received signal r(t) is filtered by the matched filter s_n*(t_0 − t), the real part of the sampled output is taken, and the bias E_n/2 is subtracted to give U_n; the decision is â_0 = arg max_n U_n.]
Error probability
In general, the error probability of the system is defined as P_e = 1 − P[C],
where P[C] is given by (6.18). Using the total probability theorem, we express the error
probability as

    P_e = Σ_{n=1}^{M} p_n P[r ∉ R_n | a_0 = n]   (6.47)

where

    E_i = ∫_{−∞}^{+∞} |s_i(t)|² dt,   i = 1, 2   (6.49)
and

    ρ = (∫_{−∞}^{+∞} s_1(t) s_2*(t) dt) / √(E_1 E_2)   (6.50)

In the case of two equally likely signals, equation (6.47) becomes

    P_e = (1/2) {P[r ∉ R_1 | a_0 = 1] + P[r ∉ R_2 | a_0 = 2]}   (6.51)

We assume that s_2 is transmitted, which means a_0 = 2. Given the received signal vector r
as in Figure 6.11, we get a decision error if the noise w = r − s_2 has a projection on the
line joining s_1 and s_2 that is smaller than −d/2. As for all projections on an orthonormal
basis, the noise component w_2 is Gaussian with zero mean and variance

    σ_I² = N_0/2   (6.52)

Then the conditional error probability is given by

    P[r ∉ R_2 | a_0 = 2] = P[w_2 < −d/2] = Q(d/(2σ_I))   (6.53)

where

    Q(a) = ∫_a^{+∞} (1/√(2π)) e^{−b²/2} db   (6.54)

is the Gaussian complementary distribution function, whose values are reported in
Appendix 6.A. Likewise, it is

    P[r ∉ R_1 | a_0 = 1] = P[w_2 > d/2] = Q(d/(2σ_I))   (6.55)

From (6.51), we obtain

    P_e = Q(d/(2σ_I))   (6.56)
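The Q function (6.54) is related to the complementary error function by Q(a) = erfc(a/√2)/2, so (6.56) can be evaluated numerically; a minimal sketch in Python (function names are illustrative):

```python
import math

def Q(a):
    """Gaussian complementary distribution function (6.54),
    Q(a) = P[N(0,1) > a], via the complementary error function."""
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def pe_binary(d, sigma_I):
    """Error probability (6.56) for two equally likely signals at
    distance d, with noise standard deviation sigma_I per dimension."""
    return Q(d / (2.0 * sigma_I))

print(Q(0.0))  # -> 0.5, since half the Gaussian mass lies above zero
```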
Observation 6.1
The error probability depends only on the ratio between the distance of the signals of the
constellation at the decision point and the standard deviation per dimension of the noise.
For this reason it is useful to define the following signal-to-noise ratio at the decision point:

    (d/(2σ_I))²   (6.57)

If E_1 = E_2 = E_s, we get

    P_e = Q(√(E_s(1 − ρ)/N_0))   (6.59)

    φ(t) = s(t)/√E_s   (6.62)
[Figure 6.12: antipodal constellation on the φ axis, s_2 = −√E_s and s_1 = +√E_s, with d/2 = √E_s. Figure 6.13: receiver with matched filter φ*(T − t) sampled at t = T; decide â_0 = 1 if r > 0 and â_0 = 2 if r < 0.]
A modulation technique with antipodal signals is binary phase shift keying (2-PSK or
BPSK), where s(t), defined by (6.60), is shown in Figure 6.14. In this case

    s_1(t) = A cos(2π f_0 t + φ_0),   0 < t < T   (6.65)

    s_2(t) = A cos(2π f_0 t + φ_0 + π) = −s_1(t),   0 < t < T   (6.66)
Orthogonal signals (ρ = 0)
We consider the two signals

    s_1(t) = A cos(2π f_0 t),   s_2(t) = A sin(2π f_0 t),   0 < t < T   (6.67)

Observing (1.71), if f_0 = k/(2T), k integer, or else f_0 ≫ 1/T, then

    E_s = E_1 = E_2 = A²T/2   and   ρ ≃ 0   (6.68)

A basis is composed of the signals themselves:

    φ_1(t) = s_1(t)/√E_s = √(2/T) cos(2π f_0 t),   0 < t < T   (6.69)

and

    φ_2(t) = s_2(t)/√E_s = √(2/T) sin(2π f_0 t),   0 < t < T   (6.70)

We note that φ_1(t) and φ_2(t) are "windowed" versions of sinusoidal signals.
Figure 6.14. Plot of s(t) = A cos(2π f_0 t + φ_0), 0 < t < T, for f_0 = 2/T and φ_0 = π/2.
[Figure 6.15: vector representation of the two orthogonal signals, s_1 = √E_s φ_1 and s_2 = √E_s φ_2, at distance √(2E_s).]

The vector representation is given in Figure 6.15. As the two vectors s_1 and s_2 are
orthogonal, their distance is √(2E_s). This distance is reduced by a factor of √2 as compared
to the case of antipodal signals with the same value of E_s. The optimum ML receiver is
depicted in Figure 6.16.
As ρ = 0, (6.59) becomes

    P_e = Q(√(E_s/N_0))   (6.71)
[Figure 6.16: ML receiver for binary orthogonal signalling: matched filters s_1*(T − t) and s_2*(T − t) sampled at t = T yield U_1 and U_2; decide â_0 = 1 if U_1 > U_2 and â_0 = 2 if U_1 < U_2.]

Figure 6.17. Error probability as a function of E_s/N_0 for binary antipodal and orthogonal
signalling.

From the curves of probability of error versus E_s/N_0 plotted in Figure 6.17, we note that
for a given P_e we have a loss of 3 dB in E_s/N_0 for the orthogonal signalling scheme as
compared to the antipodal scheme.
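The 3 dB gap follows directly from comparing (6.64) and (6.71): orthogonal signalling at a given E_s/N_0 achieves the same P_e as antipodal signalling at half that value. A small numerical check in Python (helper names are illustrative):

```python
import math

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def pe_antipodal(es_n0):   # (6.64): Q(sqrt(2 Es/N0))
    return Q(math.sqrt(2.0 * es_n0))

def pe_orthogonal(es_n0):  # (6.71): Q(sqrt(Es/N0))
    return Q(math.sqrt(es_n0))

# Orthogonal signalling at Es/N0 equals antipodal signalling at
# half that Es/N0, i.e. a 10*log10(2) ~ 3.01 dB penalty.
for es_n0 in (1.0, 4.0, 9.0):
    assert abs(pe_orthogonal(es_n0) - pe_antipodal(es_n0 / 2.0)) < 1e-15
print(10 * math.log10(2.0))  # -> 3.0102999566398116
```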
We examine in detail another binary orthogonal signalling scheme.
Binary FSK
We consider the two signals of Figure 6.18, given by

[Figure 6.18: the two binary FSK signals s_1(t) and s_2(t), 0 < t < T, with f_0 = 2/T, f_d = 0.3/T and φ_0 = 0.]

We have two "windowed" sinusoidal functions, one with frequency f_0 − f_d and the
other with frequency f_0 + f_d; f_0 is called the carrier frequency, and f_d is the frequency
deviation. Also in this case, if f_0 ± f_d ≫ 1/T, it holds that

    E_s = E_1 = E_2 ≃ A²T/2   (6.73)
and

    ρ = (∫_{−∞}^{+∞} s_1(t) s_2(t) dt) / (A²T/2) = sinc(4 f_d T)   (6.74)

As FSK is a binary modulation, we have

    P_e = Q(√(E_s(1 − ρ)/N_0))   (6.75)

Introducing the modulation index h as the ratio between the frequency deviation f_d and
the Nyquist frequency of the transmission system, equal to 1/(2T), we have

    h = f_d/(1/(2T)) = 2 f_d T   (6.76)

Therefore ρ = sinc(2h).
[Figure 6.19: plot of the correlation coefficient ρ as a function of the modulation index h, for 0 ≤ h ≤ 4.]
From the plot of ρ as a function of h illustrated in Figure 6.19, we get the minimum
value of ρ, ρ_min = −0.22, for h = 0.715, and P_e = Q(√(1.22 E_s/N_0)), with a gain of
0.7 dB in E_s/N_0 as compared to the case ρ = 0, and a loss of 2.3 dB with respect to
antipodal signalling.
From (6.74) we have ρ = 0 for h = 1/2, that is for f_d = 1/(4T): in this case we speak of
minimum shift keying (MSK).⁴ There are other values of h (1, 1.5, 2, …) that yield ρ = 0;
however, they imply larger f_d, with the consequent requirement of a larger channel bandwidth.
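The behaviour of ρ = sinc(2h) can be checked numerically; a sketch in Python, using the normalized sinc(x) = sin(πx)/(πx) convention of the text:

```python
import math

def rho(h):
    """Correlation coefficient of binary FSK, rho = sinc(2h),
    with sinc(x) = sin(pi x)/(pi x)."""
    x = 2.0 * h
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# rho = 0 at h = 0.5 (MSK); the minimum rho ~ -0.22 occurs near h ~ 0.715.
h_grid = [i / 10000.0 for i in range(1, 20000)]
h_min = min(h_grid, key=rho)
print(round(h_min, 3), round(rho(h_min), 2))  # -> 0.715 -0.22
```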
Upper bound
We assume s_m is transmitted. An error event that leads to choosing s_n is expressed as

    E_{nm} = {r : d(r, s_n) < d(r, s_m)}   (6.79)
4 In fact, MSK also requires that the phase of the modulated signal be continuous (see Section 18.5).
Since d_min ≤ d_{nm}, then Q(d_{nm}/(2σ_I)) ≤ Q(d_min/(2σ_I)), and a looser bound than (6.83)
is given by

    P_e ≤ (M − 1) Q(d_min/(2σ_I))   (6.84)
Lower bound
Given s_n, let d_{min,n} be the distance of s_n from the nearest signal. Limiting the evaluation
of the error probability to error events that are associated with the nearest signals, we obtain
the following lower bound:

    P_e ≥ (1/M) Σ_{n=1}^{M} Q(d_{min,n}/(2σ_I))   (6.85)

We get a looser bound by introducing N_min, the number of signals {s_n(t)} whose distance
from the nearest signal is d_min: 2 ≤ N_min ≤ M. Limiting the sum in (6.85) to such
signals, we have

    P_e ≥ (N_min/M) Q(d_min/(2σ_I))   (6.86)

In other words, given s_m, an error event E is reduced to considering only a signal at
minimum distance, if there is one. In the particular case of d_{min,n} = d_min for all n, we have
N_min = M, and P_e ≥ Q(d_min/(2σ_I)).
For example, for the constellation of Figure 6.3 with M = 3 we have

    d_12 = (A/2)√T,   d_13 = A√(T/2),   d_23 = (A/2)√T   (6.87)

Then d_min = (A/2)√T and N_min = 3.
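As an illustration, the bounds (6.84) and (6.86) are easy to evaluate once d_min, N_min, and σ_I are known; the sketch below uses the distances (6.87) with illustrative values A = 2, T = 1 and an assumed noise level σ_I = 0.25:

```python
import math

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def pe_upper(M, dmin, sigma_I):
    """Upper bound (6.84): Pe <= (M - 1) Q(dmin / (2 sigma_I))."""
    return (M - 1) * Q(dmin / (2.0 * sigma_I))

def pe_lower(M, Nmin, dmin, sigma_I):
    """Lower bound (6.86): Pe >= (Nmin / M) Q(dmin / (2 sigma_I))."""
    return (Nmin / M) * Q(dmin / (2.0 * sigma_I))

# Distances (6.87) with illustrative values A = 2, T = 1:
A, T, sigma_I = 2.0, 1.0, 0.25
d12 = d23 = (A / 2.0) * math.sqrt(T)
d13 = A * math.sqrt(T / 2.0)
dmin = min(d12, d13, d23)  # = (A/2) sqrt(T) = 1
lo = pe_lower(M=3, Nmin=3, dmin=dmin, sigma_I=sigma_I)
up = pe_upper(M=3, dmin=dmin, sigma_I=sigma_I)
print(lo <= up)  # -> True: the true Pe is bracketed between lo and up
```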
    a_k → s_{a_k}(t − kT)   (6.88)

    s(t) = Σ_{k=−∞}^{+∞} s_{a_k}(t − kT)   (6.89)

which is sent over the transmission channel. Let s_Ch(t) be the signal at the output of the
transmission channel, which is assumed to introduce additive white Gaussian noise w(t)
with PSD N_0/2.
Note that the system of Figure 6.1 has been investigated assuming that an isolated
waveform is transmitted. In the model of Figure 6.20, the transmission of a waveform is
repeated every symbol period T. However, with reference to Figure 6.8, assuming that the
transmitted waveforms do not give rise to intersymbol interference (ISI) at the demodulator
⁵ Note that there are systems in which encoder and modulator are jointly considered, see, for example, Chapter 12.
In that case the notion of binary channel cannot be referred to the transmission of the sequence {c_m}.
6.2. Simplified model of a transmission system and definition of binary channel 457
output⁶ we can still study the system assuming that an isolated symbol is transmitted, for
example, the symbol a_0 transmitted at instant t = 0.
At the receiver, the bits {c̃_ℓ} are obtained by inverse bit mapping from the detected
message {â_k}. The information bits {b̂_ℓ} are then recovered by a decoding process.
Definition 6.1
The transformation that maps c_m into c̃_m is called a binary channel. It is characterized by
the bit rate 1/T_cod, which is the transmission rate of the bits of the sequence {c_m}, and by
the bit error probability

    P_bit = P_BC = P[c̃_m ≠ c_m],   c̃_m, c_m ∈ {0, 1}   (6.90)

In the case of a binary symmetric channel (BSC), it is assumed that P[c̃_m ≠ c_m | c_m = 0] =
P[c̃_m ≠ c_m | c_m = 1]. We say that the BSC is memoryless if, for every choice of N
distinct instants m_1, m_2, …, m_N, the following relation holds:

    P[c̃_{m_1} ≠ c_{m_1}, c̃_{m_2} ≠ c_{m_2}, …, c̃_{m_N} ≠ c_{m_N}]
        = P[c̃_{m_1} ≠ c_{m_1}] P[c̃_{m_2} ≠ c_{m_2}] ⋯ P[c̃_{m_N} ≠ c_{m_N}]   (6.91)
⁶ Absence of ISI in this context means that the optimum reception of the waveform transmitted at instant kT,
s_{a_k}(t − kT), is not influenced by the presence of the waveforms associated with symbols transmitted at other
instants. For example, all signalling schemes that employ pulses with finite duration in the interval (0, T) do
not give rise to ISI. However, this is a particular case of the Nyquist criterion for the absence of ISI that will
be discussed in Section 7.3.3.
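A memoryless BSC with error probability p is straightforward to simulate (a sketch; by (6.91), each bit is flipped independently, so the empirical error rate converges to p by the law of large numbers):

```python
import random

def bsc(bits, p, rng):
    """Memoryless binary symmetric channel: each bit of the sequence
    {c_m} is flipped independently with probability p, as in (6.90)-(6.91)."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

rng = random.Random(0)
c = [rng.randint(0, 1) for _ in range(200000)]
c_tilde = bsc(c, 0.1, rng)
p_hat = sum(a != b for a, b in zip(c, c_tilde)) / len(c)
print(abs(p_hat - 0.1) < 0.01)  # -> True: estimate is close to p = 0.1
```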
Typically, it is required that P_bit^(dec) ≃ 10⁻²–10⁻³ for PCM or ADPCM coded speech (see
Chapter 5) and P_bit^(dec) ≃ 10⁻⁷–10⁻¹¹ for data messages.

    L_b ≤ log₂ M   (6.94)

where the equality holds for a system without coding, or, with abuse of language, for an
uncoded system. In this case we also have

    R_I = (log₂ M)/I   (6.95)
3. Symbol period:

    T = T_b L_b   (6.96)
For the orthogonal and biorthogonal signals of Section 6.7, the definition of B_min will be
different and will include the factor 1/M.
8. Spectral efficiency:

    ν = (1/T_b)/B_min = L_b/(B_min T)   (6.103)

In practice ν measures how many bits per unit of time are sent over a channel with the
conventional bandwidth B_min. In terms of R_I, from (6.93), we have

    ν = R_I I/(B_min T)   (6.104)

    Γ = M_sCh/((N_0/2) 2B_min) = E_sCh/(N_0 B_min T)   (6.105)

In general Γ expresses the ratio between the statistical power of the desired signal at
the receiver input and the statistical power of the noise measured with respect to the
conventional bandwidth B_min. We note that, for the same value of N_0/2, if B_min doubles,
the statistical power must also double to maintain a given ratio Γ.

    Γ_I = E_I/(N_0/2) = 2E_sCh/(N_0 I)   (6.106)

is the ratio between the energy per dimension of an isolated pulse, E_I, and the noise variance
per dimension, σ_I², given by (6.13). Using (6.99), the general relation becomes

    Γ_I = 2R_I E_b/N_0   (6.107)

11. Link budget: if the receiver is matched to the transmission medium for the maximum
transfer of power, from (4.92) we obtain an alternative expression of (6.105), given by

    Γ = P_sCh/(k T_wi B_min)   (6.108)

We observe that (6.105) is useful to analyze the system, whereas (6.108) is usually employed
to evaluate the link budget.
In the next sections some examples of modulation systems without channel coding are
illustrated.
6.3. Pulse amplitude modulation (PAM) 461
Energy of s_n:

    E_n = α_n² E_h,   E_h = ∫_{−∞}^{+∞} |h_Tx(t)|² dt   (6.111)

    E_s = (1/M) Σ_{n=1}^{M} E_n = ((M² − 1)/3) E_h   (6.112)

Basis function:

    φ(t) = h_Tx(t)/√E_h   (6.113)

Vector representation:

    s_n = α_n √E_h,   n = 1, …, M   (6.114)

as illustrated in Figure 6.22 for M = 8. The minimum distance is equal to

    d_min = 2√E_h = d   (6.115)
The transmitter is shown in Figure 6.23. The bit mapper is composed of a serial-to-parallel
(S/P) converter followed by a map that translates a sequence of log₂ M bits into the
corresponding value of a_0. The map is a Gray encoder (see Appendix 6.B). An example for
M = 8 is illustrated in Table 6.1. The symbol α_n is input to an interpolator filter with
impulse response h_Tx. The filter output yields the transmitted signal s_{a_0}.
[Figure 6.22: 8-PAM constellation with d/2 = √E_h: symbols s_1, …, s_8 at −7√E_h, −5√E_h, −3√E_h, −√E_h, √E_h, 3√E_h, 5√E_h, 7√E_h, with Gray bit mapping 000, 001, 011, 010, 110, 111, 101, 100.]

Table 6.1. Bit mapping for M = 8.

    bits   α_n   n
    000    −7    1
    001    −5    2
    011    −3    3
    010    −1    4
    110    +1    5
    111    +3    6
    101    +5    7
    100    +7    8
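The Gray mapping of Table 6.1 can be generated programmatically: the label of level α_n = 2n − 1 − M is the Gray code of n − 1. A sketch in Python (assuming this indexing convention, which reproduces the table above; adjacent levels then differ in a single bit, the property exploited by the bit-error approximation below):

```python
def gray(n):
    """n-th Gray codeword as an integer: consecutive n differ in one bit."""
    return n ^ (n >> 1)

def pam_mapping(M):
    """Gray bit mapping for M-PAM: level alpha_n = 2n - 1 - M, n = 1..M,
    labelled with the Gray code of n - 1 (binary string of log2 M bits)."""
    bits = len(bin(M - 1)) - 2
    return {format(gray(n - 1), f'0{bits}b'): 2 * n - 1 - M
            for n in range(1, M + 1)}

m = pam_mapping(8)
# Labels of adjacent levels differ in exactly one bit:
labels = sorted(m, key=lambda b: m[b])
for a, b in zip(labels, labels[1:]):
    assert bin(int(a, 2) ^ int(b, 2)).count('1') == 1
print(m['000'], m['100'])  # -> -7 7
```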
The type 1 implementation of the ML receiver is shown in Figure 6.24 and consists of
a filter matched to h_Tx followed by a sampler. In this case, from (6.15), r is given by

    r = s_n + w   (6.116)

where s_n = α_n(d/2), n = 1, …, M, and w is a real-valued Gaussian r.v. with zero mean
and variance N_0/2.
From the observation of r, a threshold detector yields the detected symbol â_0. The
transmitted bits are then recovered by an inverse bit mapper.
Minimum bandwidth of the modulated signal, equal to the Nyquist frequency (see Definition 7.1 on page 559):

    B_min = 1/(2T)   (6.117)
Spectral efficiency:

    ν = ((1/T) log₂ M)/(1/(2T)) = 2 log₂ M  (bit/s/Hz)   (6.118)

    Γ = E_s/(N_0/2)   (6.119)

Note that (6.119) expresses Γ as the ratio between the signal energy and the variance of
the noise component: therefore, as I = 1, it follows that Γ_I = Γ.
Symbol error probability: from the total probability theorem, letting d = 2√E_h, and
considering the outer constellation symbols separately from the others, we have

    P[E | s_M] = P[E | s_1]
              = P[â_0 ≠ 1 | a_0 = 1]
              = P[r > −(M/2 − 1)d | a_0 = 1]
              = P[α_1(d/2) + w > −(M/2 − 1)d | a_0 = 1]
              = P[(1 − M)(d/2) + w > −(M/2 − 1)d]   (6.120)
              = P[w > d/2]
              = Q(d/(2σ_I))
and

    P[E | s_n] = P[â_0 ≠ n | a_0 = n],   n = 2, …, M − 1
              = P[{r < (α_n − 1)(d/2)} ∪ {r > (α_n + 1)(d/2)} | a_0 = n]
              = P[α_n(d/2) + w < (α_n − 1)(d/2)] + P[α_n(d/2) + w > (α_n + 1)(d/2)]   (6.121)
              = P[w < −d/2] + P[w > d/2]
              = 2Q(d/(2σ_I))

where σ_I² = N_0/2. Then, for equally likely symbols we have

    P_e = (1/M)[2Q(d/(2σ_I)) + (M − 2) 2Q(d/(2σ_I))]
        = 2(1 − 1/M) Q(d/(2σ_I))   (6.122)
In terms of E_s we get⁸

    P_e = 2(1 − 1/M) Q(√(6E_s/((M² − 1)N_0)))   (6.123)

In terms of Γ, substitution of (6.119) into (6.123) yields

    P_e = 2(1 − 1/M) Q(√(3Γ/(M² − 1)))   (6.124)

Assuming that Gray coding is adopted at the transmitter, the bit error probability is
given by

    P_bit ≃ P_e/log₂ M,   valid for Γ ≫ 1   (6.125)

Equation (6.125) expresses the fact that, for Γ sufficiently large, if an error event occurs, it
is very likely that one of the symbols at the minimum distance from the transmitted symbol
is detected. Thus with high probability only one bit of the log₂ M bits associated with the
transmitted symbol is incorrectly recovered.
Curves of P_bit as a function of Γ are shown in Figure 6.25 for different values of M.
In Appendix 6.C, two other baseband modulation schemes are described: pulse position
modulation (PPM) and pulse duration modulation (PDM).
⁸ These results are valid also for continuous transmission with modulation period T, assuming absence of
ISI at the decision point; in other words, the autocorrelation function of h_Tx(t) must be a Nyquist pulse
(see Section 7.3.3).
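Equations (6.123) and (6.125) are straightforward to evaluate numerically; a sketch in Python (function names are illustrative):

```python
import math

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def pe_pam(M, es_n0):
    """Symbol error probability of M-PAM, equation (6.123)."""
    return 2 * (1 - 1 / M) * Q(math.sqrt(6 * es_n0 / (M ** 2 - 1)))

def pbit_pam(M, es_n0):
    """Approximate bit error probability with Gray coding, (6.125)."""
    return pe_pam(M, es_n0) / math.log2(M)

# For M = 2, (6.123) reduces to the antipodal result Q(sqrt(2 Es/N0)):
assert abs(pe_pam(2, 5.0) - Q(math.sqrt(2 * 5.0))) < 1e-15
print(pbit_pam(4, 10.0) < pbit_pam(8, 10.0))  # -> True: more levels, higher Pbit
```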
6.4. Phase-shift keying (PSK) 465
[Figure 6.25: P_bit as a function of Γ = 2E_s/N_0 (dB) for M-PAM, M = 2, 4, 8, 16.]
that is, signals are obtained by choosing one of the M possible values of the phase of a
sinusoidal function with frequency f_0, modulated by h_Tx.¹⁰
In the following sections we will denote by θ_k the r.v. that determines the transmitted
signal phase at instant kT. Consequently, the values of θ_k are given by φ_n, n = 1, …, M.
In this section we consider the case of an isolated pulse transmitted at instant k = 0.
An alternative expression of (6.127) is given by
Moreover, setting

    α_n = e^{jφ_n} = e^{j(π/M)(2n−1)}   (6.129)

we have
where

    α_{n,I} = Re[α_n] = cos((π/M)(2n − 1))   (6.131)

    α_{n,Q} = Im[α_n] = sin((π/M)(2n − 1))   (6.132)

    E_s = (1/M) Σ_{n=1}^{M} E_n = E_h/2   (6.134)

Basis functions:

    φ_1(t) = √(2/E_h) h_Tx(t) cos(2π f_0 t)   (6.135)

    φ_2(t) = −√(2/E_h) h_Tx(t) sin(2π f_0 t)   (6.136)

Vector representation:

    s_n = √(E_h/2) [cos((π/M)(2n − 1)), sin((π/M)(2n − 1))]^T,   n = 1, 2, …, M   (6.137)

as illustrated in Figure 6.26 for M = 8.
We note that the desired signal at the decision point, s_n, aside from the factor √(E_h/2),
coincides with α_n. Note that the signal constellation lies on a circle and the various vectors
differ in the phase φ_n.
The minimum distance is equal to

    d_min = 2√E_s sin(π/M) = √(2E_h) sin(π/M)   (6.138)
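The vector representation (6.137) and the minimum distance (6.138) can be checked numerically; a sketch in Python using complex numbers for the two-dimensional signal vectors:

```python
import math, cmath

def psk_constellation(M, Eh=2.0):
    """PSK signal vectors (6.137) as complex numbers:
    s_n = sqrt(Eh/2) e^{j pi (2n-1)/M}, n = 1..M."""
    r = math.sqrt(Eh / 2.0)
    return [r * cmath.exp(1j * math.pi * (2 * n - 1) / M)
            for n in range(1, M + 1)]

def dmin(points):
    """Minimum pairwise distance of a constellation."""
    return min(abs(p - q) for i, p in enumerate(points)
               for q in points[i + 1:])

# Check (6.138): dmin = 2 sqrt(Es) sin(pi/M), with Es = Eh/2.
M, Eh = 8, 2.0
Es = Eh / 2.0
assert abs(dmin(psk_constellation(M, Eh))
           - 2 * math.sqrt(Es) * math.sin(math.pi / M)) < 1e-12
print("ok")
```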
In Figure 6.26, the projections of s_n on the axis φ_1 (in phase) and on the axis φ_2 (quadrature)
are also represented, together with the Gray coding of the various symbols, represented by
the bits b_1, b_2, b_3.
A PSK transmitter for M = 8 is shown in Figure 6.27. The bit mapper maps a sequence
of log₂ M bits to a constellation point with value α_n. The quadrature components α_{n,I} and
α_{n,Q} are input to interpolator filters h_Tx. The filter output signals are multiplied by the
carrier signal, cos(2π f_0 t), and by the carrier signal phase-shifted by π/2 (obtained, for
example, by a Hilbert filter), respectively. The transmitted signal (6.130) is obtained by
adding the two components.
The type 1 implementation of the ML receiver is illustrated in Figure 6.28. From the
general scheme of Figure 6.8, we note that the basis functions (6.136) are implemented
partially by a correlator with a sinusoidal signal, and partially by a filter matched to h_Tx.
From Figure 6.26 we note that the decision regions are angular sectors of width 2π/M.
For M = 2, 4, 8, simple decision rules can be defined. For M > 8, detection can be made
by observing the phase of the received vector¹¹ r = [r_I, r_Q]^T.
¹¹ For the sake of notation uniformity with the following chapters, the components r_1 and r_2 of r will be indicated
by r_I and r_Q, respectively.
Figure 6.28. ML receiver, implementation type 1, of an M-PSK system for an isolated pulse.
Thresholds are set at (2π/M)n, n = 1, …, M.
Spectral efficiency:

    ν = ((1/T) log₂ M)/(1/T) = log₂ M  (bit/s/Hz)   (6.140)

We note that Γ also expresses the ratio between the energy per dimension and the variance
of the noise components; moreover, Γ_I = Γ if M > 2.
Symbol error probability: with equally likely signals, exploiting the symmetry of the
signalling scheme we get

    P_e = P[E | s_n]
        = 1 − P[C | s_n]
        = 1 − P[r ∈ R_n | a_0 = n]   (6.142)
        = 1 − ∫∫_{R_n} p_r(ρ_I, ρ_Q | a_0 = n) dρ_I dρ_Q

[Figure 6.29: received vector r = s_m + w with phase θ, and the angular decision sector of width 2π/M around s_m.]
where

    p_θ(z) = (e^{−E_s/N_0}/(2π)) {1 + 2√(π E_s/N_0) cos z · e^{(E_s/N_0) cos² z} [1 − Q(√(2E_s/N_0) cos z)]}   (6.145)

for −π ≤ z ≤ π.
The integral (6.144) cannot be solved in closed form. If E_s/N_0 ≫ 1, for M ≥ 4 we can
use the approximation (6.363) in (6.145) to obtain

    p_θ(z) ≃ √(E_s/(π N_0)) cos z · e^{−(E_s/N_0) sin² z}   (6.146)
    Γ_I = 2Γ   (6.150)

Moreover, it is ν = 1. This result is due to the fact that a BPSK system does not efficiently
use the available bandwidth 1/T: in fact, only half of the band carries information. The
information in the other half can be deduced by symmetry and is therefore redundant.
[Figure 6.31: P_bit as a function of Γ = E_s/N_0 (dB) for M-PSK, M = 2, 4, 8, 16, 32.]
From (6.64), obtained for antipodal signals, and using (6.141), the evaluation of P_e yields

    P_e = P_bit = Q(√(2E_s/N_0)) = Q(√(2Γ))   (6.151)
The transmitter and the receiver for a BPSK system are shown in Figure 6.32 and have
a very simple implementation. The bit mapper of the transmitter maps '0' into '−1' and
'1' into '+1' to generate NRZ binary data (see Appendix 7.A). At the receiver, the decision
element implements the "sign" function to detect NRZ binary data. The inverse bit mapping
to recover the bits of the information message is straightforward.
Figure 6.32. Schemes of transmitter and receiver for a BPSK system with φ_0 = 0.
and

    P_bit ≃ Q(√Γ)   (6.155)

[Figure 6.33: QPSK constellation in the (φ_1, φ_2) plane at radius √E_s = √(E_h/2), with Gray labels (b_1 b_2): s_1 = 11, s_2 = 01, s_3 = 00, s_4 = 10.]

(see Figure 6.33), decisions can be made independently on r_I and r_Q, using a simple
threshold detector with threshold set at zero.
We observe that, for h_Tx(t) = K w_T(t), the transmitter filter is a simple holder. At the
receiver the matched filter plus sampler becomes an integrator that is cleared before each
integration over a symbol period of duration T. In other words, it consists of an integrate-and-dump.
We assume now that the receiver recovers the carrier signal, except for a phase offset
φ_a. In particular, with reference to the scheme of Figure 6.28, the reconstructed carrier is
cos(2π f_0 t − φ_a). In this case s_n coincides with √E_s e^{jφ_a} α_n, where α_n is given by (6.129).
Consequently, it is as if the constellation at the receiver were rotated by φ_a. To prevent
this problem there are two strategies. By the coherent method, a receiver estimates φ_a from
the received signal, and considers the original constellation for detection, using the signal
r e^{−jφ̂_a}, where φ̂_a is the estimate of φ_a. By the differential non-coherent method, a receiver
detects the data using the difference between the phases of signals at successive sampling
instants. In other words,

• for M-PSK, the phase of the transmitted signal at instant kT is given by (6.126),
with

    θ_k ∈ {π/M, 3π/M, …, (2M − 1)π/M}   (6.156)
6.5. Differential PSK (DPSK) 475
    θ_k = θ_k⁰ + φ_a   (6.158)

In any case,

    θ_k − θ_{k−1} = θ_k⁰ − θ_{k−1}⁰   (6.159)

and the ambiguity of φ_a is removed. For phase-modulated signals, three differential non-coherent
receivers that determine an estimate of (6.159) are discussed in Chapter 18.
For E_s/N_0 ≫ 1, using the definition of the Marcum function Q_1(·, ·) (see Appendix 6.A),
it can be shown that the error probability of an isolated symbol is approximated by the
following bound [2, 3]:

    P_e ≲ 1 + Q_1(√((E_s/N_0)(1 − sin(π/M))), √((E_s/N_0)(1 + sin(π/M))))
            − Q_1(√((E_s/N_0)(1 + sin(π/M))), √((E_s/N_0)(1 − sin(π/M))))   (6.160)
¹² Note that we consider a differential non-coherent receiver with which is associated a differential symbol
encoder at the transmitter (see (6.157) or (6.169)). However, as we will see in the next section, a differential
encoder can also be used with a coherent receiver.
For Gray coding of the values of θ_k in (6.156), the bit error probability is given by

    P_bit = P_e/log₂ M   (6.162)

where P_e is given by (6.161).
For M = 2, the exact formula of the error probability is [2, 3]

    P_bit = P_e = (1/2) e^{−E_s/N_0}   (6.163)

where

    a = √((E_s/N_0)(1 − 1/√2)),   b = √((E_s/N_0)(1 + 1/√2))   (6.165)
[Figure 6.34: P_bit as a function of Γ (dB) for PSK and DPSK.]
Note that, if the previously received sample is used as a reference, DPSK gives lower
performance with respect to PSK, especially for M ≥ 4, because both the current sample
and the reference sample are corrupted by noise. This drawback can be mitigated if
the reference sample is constructed by using more than one previously received sample
[4]. In this way we establish a gradual transition between differential phase demodulation
and coherent demodulation. In particular, if the reference sample is constructed using the
samples received in the two previous modulation intervals, DPSK and PSK yield similar
performance [4].
BPSK system without differential encoding. The phase θ_k ∈ {0, π} is associated with b_k
by the bit map of Table 6.3 (b_k = 0 → θ_k = 0, b_k = 1 → θ_k = π).

Differential encoder. For any c_{−1} ∈ {0, 1}, we encode the information bits as

    c_k = c_{k−1} ⊕ b_k,   b_k ∈ {0, 1},   k ≥ 0   (6.166)

Decoder. If {ĉ_k} are the detected coded bits at the receiver, the information bits are
recovered by

    b̂_k = ĉ_k ⊕ ĉ_{k−1}   (6.167)

We note that a phase ambiguity φ_a = π does not alter the recovered sequence {b̂_k}: in
fact, in this case {ĉ_k} becomes {ĉ'_k = ĉ_k ⊕ 1} and we have
Multilevel case
Let {d_k} be a multilevel information sequence, with d_k ∈ {0, 1, …, M − 1}. In this case
we have

    c_k = c_{k−1} ⊕_M d_k   (6.169)

where ⊕_M denotes the modulo-M sum. Because c_k ∈ {0, 1, …, M − 1}, the phase associated
with the bit map is θ_k ∈ {π/M, 3π/M, …, (2M − 1)π/M}. This encoding and
bit-mapping scheme is equivalent to (6.157).
At the receiver the information sequence is recovered by the modulo-M difference

    d̂_k = ĉ_k ⊖_M ĉ_{k−1}   (6.170)

It is easy to see that an offset equal to j ∈ {0, 1, …, M − 1} in the sequence {ĉ_k},
corresponding to a phase offset in {0, 2π/M, …, (M − 1)2π/M} in {θ_k}, does not
cause errors in {d̂_k}. In fact,

    (ĉ_k ⊕_M j) ⊖_M (ĉ_{k−1} ⊕_M j) = ĉ_k ⊖_M ĉ_{k−1} = d̂_k   (6.171)
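The modulo-M differential encoder (6.169), the decoder (6.170), and the offset invariance (6.171) can be sketched in Python (the initial reference c_{−1} is passed as `c_init`; function names are illustrative):

```python
def diff_encode(d, M, c_init=0):
    """Differential encoding (6.169): c_k = (c_{k-1} + d_k) mod M."""
    c, prev = [], c_init
    for dk in d:
        prev = (prev + dk) % M
        c.append(prev)
    return c

def diff_decode(c, M, c_init=0):
    """Decoding (6.170): d_k = (c_k - c_{k-1}) mod M."""
    d, prev = [], c_init
    for ck in c:
        d.append((ck - prev) % M)
        prev = ck
    return d

M = 4
d = [0, 3, 1, 2, 2, 0, 1]
c = diff_encode(d, M)
assert diff_decode(c, M) == d
# A constant offset j in {c_k} (a rotation by j*2pi/M) leaves the
# decoded sequence unchanged, as stated by (6.171):
j = 3
c_off = [(ck + j) % M for ck in c]
assert diff_decode(c_off, M, c_init=j) == d
print("ok")
```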
The performance of a PSK system with differential encoding and coherent demodulation by
the scheme of Figure 6.28 is worse as compared to a system with absolute phase encoding.
However, for small P_e, up to values of the order of 0.1, we observe that an error in {ĉ_k}
causes two errors in {d̂_k}. Approximately, P_e increases by a factor of 2,¹⁴ which causes a
negligible loss in terms of Γ.
To combine Gray encoding of the values of c_k with the differential encoding (6.169), a
two-step procedure is adopted:
¹⁴ If we indicate with P_e,Ch the channel error probability, then the error probability after decoding is given
by [2]

    Binary case:      P_bit = 2P_bit,Ch [1 − P_bit,Ch]   (6.172)

    Quaternary case:  P_e = 4P_e,Ch − 8P_e,Ch² + 8P_e,Ch³ − 4P_e,Ch⁴   (6.173)
6.5. Differential PSK (DPSK) 479
1. represent the values of d_k with a Gray encoder using a combinatorial table, as illus-
trated for example in Table 6.5 for M = 8;
2. determine the differentially encoded symbols according to (6.169).
    E_s = (1/M) Σ_{n=1}^{M} E_n    (6.180)
Basis functions: basis functions for the signals defined in (6.177) are given by

    φ₁(t) = √(2/E_h) h_Tx(t) cos(2π f₀ t)
                                                  (6.184)
    φ₂(t) = −√(2/E_h) h_Tx(t) sin(2π f₀ t)
Vector representation:

    s_n = √(E_h/2) [α_{n,I}, α_{n,Q}]^T    n = 1, …, M    (6.185)

as illustrated in Figure 6.37 for various values of M.
We note that, except for the factor √(E_h/2), in a QAM system s_n coincides with α_n.
It is important to observe that for the signals in (6.185) the minimum distance between
two symbols is equal to √(2E_h), hence

    d_min = √(2E_h)    (6.186)

Consequently, to maintain a given d_min, for every additional bit of information, that is,
doubling M, we need to increase the average energy of the system by about 3 dB, according
to the law

    d²_min = 6 E_s / (M − 1)    (6.187)
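The law (6.186)–(6.187) can be checked numerically for square QAM constellations with odd-integer coordinates (a small sketch; the helper name is ours):

```python
# For square M-QAM with points on the odd-integer grid {..., -3, -1, 1, 3, ...},
# the minimum distance is d_min = 2, and d_min^2 = 6*Es/(M - 1) as in (6.187).

import itertools, math

def square_qam(M):
    L = int(math.isqrt(M))
    assert L * L == M
    levels = [2 * i - (L - 1) for i in range(L)]   # odd-spaced PAM levels
    return [complex(a, b) for a in levels for b in levels]

for M in (4, 16, 64, 256):
    pts = square_qam(M)
    Es = sum(abs(p) ** 2 for p in pts) / M          # average symbol energy
    dmin = min(abs(p - q) for p, q in itertools.combinations(pts, 2))
    assert math.isclose(dmin ** 2, 6 * Es / (M - 1), rel_tol=1e-12)
```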
Figure 6.37. Signal constellations of M-QAM for M = 4, 16, 32, 64, 128, and 256, with axes
φ₁ (I) and φ₂ (Q). The term √(E_h/2) in (6.185) is normalized to one.
482 Chapter 6. Modulation theory
[Figure 6.39 bit map (partial): bits b₁b₂ select the column (10, 11, 01, 00) and b₃b₄ the row;
e.g. the top two rows read s₄ s₃ s₂ s₁ ↔ 1000, 1100, 0100, 0000 and s₈ s₇ s₆ s₅ ↔ 1001,
1101, 0101, 0001.]
The transmitter of an M-QAM system is illustrated in Figure 6.38 for M = 16. The bit
map and the signal constellation of a 16-QAM system are shown in Figure 6.39. We note
that the signals that are multiplied by the two carriers are PAM signals: in this example
they are 4-PAM signals.
The ML receiver for a 16-QAM system is illustrated in Figure 6.40. We note that, as the
16-QAM constellation is rectangular, the decision regions are also rectangular and detection
on the I and Q branches can be made independently by observing r_I and r_Q. In general,
however, given r = [r_I, r_Q]^T, we need to compute the M distances from the points s_n,
n = 1, …, M, and choose the nearest to r.
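The equivalence between per-axis threshold detection and the full minimum-distance search, which holds because the decision regions are rectangular, can be sketched as follows (an illustrative check, not the book's receiver implementation):

```python
# For rectangular 16-QAM, independent I/Q slicing gives the same decision as
# the exhaustive nearest-point search over all M constellation points.

import random

levels = [-3, -1, 1, 3]
constellation = [complex(a, b) for a in levels for b in levels]

def slice_pam(x):
    """Nearest 4-PAM level to the real value x (threshold detector)."""
    return min(levels, key=lambda v: abs(x - v))

def detect_iq(r):
    return complex(slice_pam(r.real), slice_pam(r.imag))

def detect_full(r):
    return min(constellation, key=lambda s: abs(r - s))

random.seed(0)
for _ in range(10000):
    r = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert detect_iq(r) == detect_full(r)
```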
The following parameters of QAM systems are equal to those of PSK:

    B_min = 1/T    (6.188)

    ν = log₂ M    (6.189)
6.6. AM-PM or quadrature amplitude modulation (QAM) 483
and

    Γ = E_s / N₀    (6.190)

Moreover, we have

    Γ_I = Γ    (6.191)
Another expression can be found in terms of Γ using (6.186), (6.183), and (6.190),

    P_e ≃ 4 (1 − 1/√M) Q( √( 3Γ / (M − 1) ) )    (6.197)

    P_bit ≃ P_e / log₂ M    (6.198)
Curves of P_bit as a function of Γ are shown in Figure 6.41. We note that, to achieve a
given P_bit, if M is increased by a factor of 4, we need to increase Γ by 6 dB: in other words,
if we increase the number of bits per symbol by one, on average we need to increase
the energy of the system by 3 dB. We arrived at the same result using the notion of d_min
in (6.187).
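The 6 dB rule quoted above can be checked directly from (6.197): at a fixed Q-function argument (fixed P_e), Γ must scale as (M − 1), so multiplying M by 4 costs close to 6 dB for large M (a quick sketch; z₀ ≈ 22 for P_e ≈ 10⁻⁶ as in Section 6.7):

```python
# Gamma required to keep the Q-function argument in (6.197) fixed scales
# as (M - 1); the increase for M -> 4M approaches 6 dB for large M.

import math

def gamma_required(M, z0):
    """Gamma such that 3*Gamma/(M-1) = z0 in (6.197)."""
    return z0 * (M - 1) / 3.0

z0 = 22.0                       # Q(sqrt(22)) is of the order of 1e-6
delta_dB = 10 * math.log10(gamma_required(256, z0) / gamma_required(64, z0))
print(delta_dB)                 # close to 6 dB
assert 5.9 < delta_dB < 6.2
```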
Figure 6.41. Bit error probability as a function of Γ for M-QAM transmission with rectangular
constellation (M = 4, 16, 64, 256).
Figure 6.42. Comparison between PSK and QAM systems in terms of P_bit as a function of Γ
(M-PSK for M = 4, 8, 16, 32; M-QAM for M = 4, 16, 64, 256).
Figure 6.43. Vector representations of sets of orthogonal signals for (a) M = 2 and
(b) M = 3; each signal s_n lies at distance √E_s from the origin along its own axis φ_n.
6.7. Modulation methods using orthogonal and biorthogonal signals 487
2. Non-coherent

    s_n(t) = A sin(2π f_n t + φ_n)    0 < t < T    n = 1, …, M    (6.204)

where the conditions

    f_n − f_{n−1} = 1/T
                                                                        (6.205)
    f_n + f_ℓ = k (1/T)  (k integer)    or else    f₁ ≫ 1/T

guarantee orthogonality among the signals. In (6.204) the uniform r.v. φ_n is introduced
as each signal has an arbitrary phase.
We note that in both cases the bandwidth required by the passband modulation system is
proportional to M. In the case of coherent demodulation, we use the definition

    B_min = M / (2T)    (6.206)

Correspondingly, from (6.103) we have

    ν = 2 log₂ M / M    (6.207)

and from (6.105)

    Γ = 2 E_s / (N₀ M)    (6.208)

As I = M, we have

    Γ_I = Γ    (6.209)

For non-coherent demodulation, we have B_min = M/T, ν = (log₂ M)/M, Γ = E_s/(N₀ M),
and Γ_I = 2Γ.
    {p_{n,j}}    n = 1, …, L    j = 0, …, L − 1    (6.211)

    s_n(t) = Σ_{j=0}^{L−1} p_{n,j} w_{T_c}(t − j T_c)    n = 1, …, M    0 < t < L T_c = T    (6.212)
the interval (0, T). With reference to the interval (0, 2T), the number of information bits
is equal to log₂ M = 1: consequently, as T is the modulation interval, we have L_b = 1/2;
note that we also have I = 2. Then, with respect to the binary code division modulation
presented above, bandwidth and rate are halved,

    B_min = L / (4T)    (6.221)

    R_I = 0.5 / 2 = 0.25    (6.222)

The other parameters are given by

    ν = [0.25 / ((L/(4T)) T)] · 2 = 2/L    (6.223)

    Γ = 4 E_s / (N₀ L)    (6.224)

and

    Γ_I = E_s / N₀    (6.225)
We note that this case can be regarded as an example of a repetition code where the same
symbol is repeated twice.
Probability of error
The ML receiver is given by the general scheme of Figure 6.8, where I = M and φ_i
is proportional to s_i according to (6.200). As the various signals have equal energy, the
decision variables are given by

    U_n = Re[⟨r, s_n⟩] = Re[ ∫₀^{t₀} r(t) s_n*(t) dt ]    n = 1, …, M    (6.226)

Assuming the signal s_m is transmitted, we have

    U_n = E_s δ_{nm} + Re[ ∫₀^{t₀} w(t) s_n*(t) dt ]
                                                        (6.227)
        = E_s δ_{nm} + √E_s w_n

where w_n = Re[⟨w, φ_n⟩] is the n-th noise component. Then {U_n}, n = 1, …, M, are
Gaussian r.v.s with mean

    m_{U_n} = E[U_n] = E_s δ_{nm}    (6.228)

and cross-covariance

    E[(U_n − m_{U_n})(U_ℓ − m_{U_ℓ})] = E_s (N₀/2) δ_{ℓn}    (6.229)

Hence, the r.v.s {U_n} are statistically independent with variance E_s N₀/2.
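The statistics (6.228)–(6.229) can be checked by a short Monte Carlo run that generates the decision variables directly from the model (6.227) (a sketch under the stated Gaussian model; the parameter values are ours):

```python
# Decision variables for M equal-energy orthogonal signals, from (6.227):
# U_n = Es*delta_nm + sqrt(Es)*w_n, with w_n i.i.d. N(0, N0/2). The sample
# statistics should match (6.228)-(6.229): mean Es*delta_nm, variance Es*N0/2,
# and (near) zero cross-covariance.

import numpy as np

rng = np.random.default_rng(0)
M, Es, N0, trials = 4, 1.0, 0.5, 200_000
m = 0                                            # index of the transmitted signal

w = rng.normal(0.0, np.sqrt(N0 / 2), size=(trials, M))   # noise components w_n
U = Es * (np.arange(M) == m) + np.sqrt(Es) * w           # (6.227)

assert abs(U[:, m].mean() - Es) < 0.01            # mean Es on branch m
assert abs(U[:, 1].mean()) < 0.01                 # zero mean elsewhere
assert abs(U.var(axis=0).mean() - Es * N0 / 2) < 0.01
assert abs(np.cov(U[:, 0], U[:, 1])[0, 1]) < 0.01
```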
15 The computation of the integral (6.234) was carried out using the Hermite polynomial series expansion, as
indicated in [5, page 294].
Figure 6.44. Bit error probability as a function of Γ = 2E_s/(N₀M) for transmission with
M orthogonal signals (M = 2, 4, 8, 16, 32, 128).
Figure 6.45. Bit error probability as a function of E_b/N₀ for transmission with M orthogonal
signals (M = 2, 4, 8, 16, 32, 128).
Figure 6.46. Comparison between the exact error probability and the limit (6.236) for
transmission with M orthogonal signals.
Figure 6.46 shows a comparison between the error probability obtained by exact computa-
tion and the bound (6.236) for two values of M.
      M        E_b/N₀ (dB)
      2³           9.4
      2⁴           8.3
      2⁵           7.5
      2⁶           7.0
      2¹⁰          5.4
      2¹⁵          4.5
      2²⁰          3.9
      ⋮             ⋮
      ∞          −1.59
and, as I = M/2,

    Γ_I = 2Γ    (6.242)

    ν = 4 log₂ M / M    (6.244)

    Γ = 4 E_s / (N₀ M)    (6.245)

and

    Γ_I = Γ    (6.246)
Probability of error
The receiver consists of M/2 correlators, or matched filters, which provide the decision
variables

    {U_n}    n = 1, …, M/2    (6.247)

The optimum receiver selects the output with the largest absolute value, |U_i|; subsequently
it selects s_i or −s_i depending on the sign of U_i.
To compute the probability of correct decision, we proceed as in the previous case.
Assuming that s_m is taken as one of the signals of the basis, then

    P[C | s_m] = P[U_m > 0, |U_m| > |U_1|, …, |U_m| > |U_{m−1}|, |U_m| > |U_{m+1}|, …, |U_m| > |U_{M/2}|]
                                                                                            (6.248)
               = ∫₀^{+∞} (1/√(2π)) e^{−(1/2)(α − √(2E_s/N₀))²} [1 − 2Q(α)]^{M/2−1} dα

The symbol error probability is given by

    P_e = 1 − P[C | s_m]    (6.249)
where the first term arises from the comparison with (M − 2) orthogonal signals, and the
second arises from the comparison with an antipodal signal.
Figure 6.49 shows a comparison between the error probability obtained by exact com-
putation and the bound (6.251) for two values of M.
Figure 6.47. Bit error probability as a function of Γ = 4E_s/(N₀M) for transmission with
M biorthogonal signals (M = 2, 4, 8, 16, 32, 128).
Figure 6.48. Bit error probability as a function of E_b/N₀ for transmission with M biorthogonal
signals (M = 2, 4, 8, 16, 32, 128).
Figure 6.49. Comparison between the exact error probability and the limit (6.251) for
transmission with M biorthogonal signals.
where c_{n,j} ∈ {−1, +1}, and w̃_T(t) = (1/√T) rect((t − T/2)/T) is the normalized rectangular
window of duration T (see (1.456)) with unit energy. Then E_w is the energy of the pulse s_n
evaluated on a generic subperiod T. Moreover, we have B_min = 1/(2T).
Interpreting the n₀ pulses

    w̃_T(t), …, w̃_T(t − (n₀ − 1)T)    (6.253)

as elements of an orthonormal basis, we derive the structure of the optimum receiver.
    E_s = n₀ E_w    (6.256)

    E_I = E_s / I = E_w    (6.257)

    E_b = E_I / R_I = E_w    (6.258)

    Γ_I = E_I / (N₀/2) = 2 E_w / N₀ = 2 E_b / N₀    (6.259)

    Γ = E_s / ( (1/(2T)) N₀ T_s ) = 2 E_w / N₀ = Γ_I    (6.260)

Moreover, the minimum distance between two elements of the set of signals (6.252) is
equal to d_min = √(4E_w). The error probability is determined by the ratio (6.57)

    u = d²_min / (2σ_I)² = d²_min / (2N₀) = 2 E_w / N₀ = 2 E_b / N₀    (6.261)

where in the last step equation (6.258) is used.
Optimum receiver
With reference to the implementation of Figure 6.8, as the elements of the orthonormal basis
(6.253) are obtained by shifting the pulse w̃_T(t), the optimum receiver can be simplified
as illustrated in Figure 6.50, where the projections of the received signal r(t) onto the
components of the basis (6.253) are obtained sequentially.

Figure 6.51. ML receiver for the signal set (6.252) under the assumption of uncoded sequences.

The vector components r = [r₀, r₁, …, r_{n₀−1}]^T are then used to compute the Euclidean
distance from each of the possible code sequences. The scheme of Figure 6.50 yields the
detected signal of the type (6.252), or equivalently the detected code sequence
ĉ = [ĉ₀, ĉ₁, …, ĉ_{n₀−1}]^T, according to the ML criterion. This procedure is usually called
soft-input decoding.
For the uncoded system, the receiver can be simplified by computing the Euclidean
distance component by component, as illustrated in Figure 6.51. In the binary case under
examination,
The resulting channel model (memoryless binary symmetric) is that of Figure 6.21.
In some receivers for coded systems, a simplification of the scheme of Figure 6.50
is obtained by first detecting the single components c̃_i ∈ {−1, 1} according to the scheme
of Figure 6.51. Successively, the binary vector c̃ = [c̃₀, …, c̃_{n₀−1}]^T is formed. Then we
choose among the possible code sequences c_n, n = 1, …, 2^{k₀}, the one that differs in the
smallest number of positions from the sequence c̃. This scheme is usually called
hard-input decoding and is clearly suboptimum as compared to the scheme with soft input.
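The gap between soft-input and hard-input decoding can be illustrated with the repetition code mentioned earlier (a toy sketch; the example observation vector is ours): one strong reliable component can rescue the soft decision, while hard decisions lose that reliability information.

```python
# Soft-input vs hard-input decoding of the length-3 binary repetition code
# with antipodal components c_i in {-1, +1}.

code = [(-1, -1, -1), (+1, +1, +1)]   # repetition code, n0 = 3

def soft_decode(r):
    """Codeword at minimum Euclidean distance from the received vector r."""
    return min(code, key=lambda c: sum((ri - ci) ** 2 for ri, ci in zip(r, c)))

def hard_decode(r):
    """Slice each component first, then minimize the Hamming distance."""
    q = tuple(1 if ri >= 0 else -1 for ri in r)
    return min(code, key=lambda c: sum(qi != ci for qi, ci in zip(q, c)))

# Transmitted (+1,+1,+1): two components slightly negative, one strongly positive.
r = (1.9, -0.2, -0.3)
assert soft_decode(r) == (+1, +1, +1)   # soft metric exploits the strong +1.9
assert hard_decode(r) == (-1, -1, -1)   # majority of sliced bits is wrong
```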
[Table 6.9 (garbled in extraction): for each modulation method the table lists the symbol
error probability P_e as a function of Γ, the minimum bandwidth B_min, the spectral
efficiency ν, and the rate R_I. The systems compared are: binary antipodal (BB); M-PAM
with SSB and with DSB; M-QAM (M = L²); BPSK (2-PSK) and QPSK (4-PSK); M-PSK with
M > 2, for which P_e = 2 Q(√(2Γ) sin(π/M)); M orthogonal signals (BB), for which
P_e ≤ (M − 1) Q(√(MΓ/2)) with B_min = M/(2T); and M biorthogonal signals (BB), for which
P_e ≤ (M − 2) Q(√(MΓ/4)) + Q(√(MΓ/2)) with B_min = M/(4T).]
that an equivalent approach often adopted in the literature is to give E_b/N₀, related to Γ_I
through (6.107), as a function of ν, related to R_I through (6.104).
A first comparison is made by assuming the same symbol error probability, P_e = 10⁻⁶,
for all systems. As Q(√z₀) = 10⁻⁶ implies z₀ ≃ 22, considering only the argument of the
Q function in Table 6.9, we have the following results.
1. M-PAM. From

    Γ = z₀ (M² − 1) / 3    (6.272)

and

    Γ_I = Γ    R_I = log₂ M    (6.273)

we obtain

    Γ_I = (z₀/3) (2^{2R_I} − 1)    (6.274)
2. M-QAM. From

    Γ = z₀ (M − 1) / 3    (6.275)

and

    Γ_I = Γ    R_I = (1/2) log₂ M    (6.276)

we obtain

    Γ_I = (z₀/3) (2^{2R_I} − 1)    (6.277)

We note that for QAM a certain R_I is obtained with a number of symbols equal
to M_QAM = 2^{2R_I}, whereas for PAM the same efficiency is reached for
M_PAM = 2^{R_I} = √(M_QAM).
I
we note that the multiplicative constant in front of the Q function cannot be ignored:
therefore a closed-form analytical expression for 0 I as a function of R I for a given
Pe cannot be found.
We note that, for a given value of R I , PAM and QAM require the same value of 0 I ,
whereas PSK requires a much larger value of 0 I .
An exact comparison is now made for a given bit error probability. Using the P_bit curves
previously obtained, the behavior of R_I as a function of Γ_I for P_bit = 10⁻⁶ is illustrated
in Figure 6.52.
We observe that the required Γ_I is much larger than the minimum value obtained by the
Shannon limit. As will be discussed in Section 6.10, the gap can be reduced by channel
coding.
We also note that, for large R_I, PAM and QAM allow a lower Γ_I with respect to PSK;
moreover, orthogonal and biorthogonal modulations operate with R_I < 1, and correspondingly
very small values of Γ_I.
Figure 6.52. Γ_I required for a given rate R_I, for different modulation methods and bit error
probability P_bit = 10⁻⁶; the Shannon limit is also shown. The parameter in the figure denotes
the number of symbols M of the constellation.
and depends mainly on the energy E s of the signal and on the spectral density N0 =2 of
the noise.
In addition to the required power and bandwidth, the choice of a modulation scheme is
based on the channel characteristics and on the cost of the implementation: until recently,
for example, non-coherent receivers were preferred in radio mobile systems because of
their simplicity, even though the performance is inferior to that of coherent receivers (see
Chapter 18) [2].
We consider the transmission of signals with a given power over an AWGN channel having
noise power spectral density equal to N0 =2.
We recall the definition (6.93) of the encoder-modulator rate, R_I = L_b/I, where I is the
number of signal space dimensions. For example, the encoder-modulator for the 8-PAM
system with bit map defined in Table 6.1 has rate R_I = 3 (bit/dim), as L_b = 3 and I = 1.
From (6.95), the cardinality of the alphabet A is equal to M = 2^{R_I} = 8.
Let us consider for example a monodimensional transmission system (PAM) with an
alphabet A of cardinality M for a given rate R_I, such that L_b < log₂ M, that is M > 2^{R_I}
from (6.93); the redundancy of the alphabet can be used to encode sequences of information
bits: in this case we speak of coded systems (see Example 6.7.5). Let us take a PAM system
with R_I = 3 and M = 16: redundancy may be introduced in the sequence of transmitted
symbols. The mapping of sequences of information bits into sequences of coded output
symbols may be described by a finite state sequential machine. Some specific examples
will be illustrated in Chapter 12.
We recall the definition (1.135) of the passband B associated with the frequency response
of a channel, with bandwidth given by (1.140), B = ∫_B df. Channel capacity is defined
as the maximum of the average mutual information between the input and output signals
of the channel [6, 7]. For transmission over an ideal AWGN channel, the channel capacity
per dimension is given by

    C = (1/2) log₂(1 + Γ_I)    (bit/dim)    (6.282)

with limiting behavior

    Γ_I ≪ 1 :  C ≃ (1/2) log₂(e) Γ_I    (6.283)

    Γ_I ≫ 1 :  C ≃ (1/2) log₂(Γ_I)    (6.284)
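The capacity formula (6.282) and its two limits (6.283)–(6.284) are easy to verify numerically (a brief sketch):

```python
# C = 0.5*log2(1 + Gamma_I) bit/dim, with its low- and high-SNR limits.

import math

def capacity(gamma_I):
    return 0.5 * math.log2(1.0 + gamma_I)

# Low-SNR limit (6.283): C ~ 0.5*log2(e)*Gamma_I
g = 1e-3
assert math.isclose(capacity(g), 0.5 * math.log2(math.e) * g, rel_tol=1e-3)

# High-SNR limit (6.284): C ~ 0.5*log2(Gamma_I)
g = 1e6
assert math.isclose(capacity(g), 0.5 * math.log2(g), rel_tol=1e-5)
```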
Figure 6.53. Capacity of an ideal AWGN channel for Gaussian and M-PAM input signals.
[From Forney and Ungerboeck (1998). © 1998 IEEE.]
equal to 10⁻⁶ is obtained for uncoded transmission, are also indicated [12]. We note that the
curves saturate as information cannot be transmitted at a rate larger than R_I = log₂ M.
Let us consider, for example, the uncoded transmission of 1 bit of information per
modulation interval by a 2-PAM system, where we have a symbol error probability equal to
10⁻⁶ for Γ_I = 13.5 dB. If the number of symbols in the alphabet A doubles, choosing
4-PAM modulation, we see that the coded transmission of 1 bit of information per
modulation interval with rate R_I = 1 is possible, and an arbitrarily small error probability can be
obtained for Γ_I = 5 dB. This indicates that a coded 4-PAM system may achieve a gain of
about 8.5 dB in signal-to-noise ratio over an uncoded 2-PAM system, at an error probability
of 10⁻⁶. If the number of symbols is further increased, the additional achievable gain is
negligible. Therefore we conclude that, by doubling the number of symbols with respect to
an uncoded system, we obtain in practice the entire gain that would be expected from the
expansion of the input alphabet.
We see from Figure 6.53 that for small values of Γ_I the choice of a binary alphabet
is almost optimum: in fact for Γ_I < 1 (0 dB) the capacity given by (6.282) is essentially
equivalent to the capacity given by (6.287) with a binary alphabet of input symbols.
For large values of Γ_I, the capacity of multilevel systems asymptotically approximates
a straight line that is parallel to the capacity of the AWGN channel. The asymptotic loss of
πe/6 (1.53 dB) is due to the choice of a uniform rather than Gaussian distribution for the set
of input symbols. To achieve the Shannon limit it is not sufficient to use coding techniques
with equally likely input symbols, no matter how sophisticated they are: to bridge the gap
of 1.53 dB, shaping techniques are required [13] that produce a distribution of the input
symbols similar to a Gaussian distribution.
Coding techniques for small Γ_I and large Γ_I are therefore quite different: for low Γ_I,
binary codes are almost optimum and shaping of the constellation is not necessary;
for high Γ_I, instead, constellations with more than two elements must be used. To reach
capacity, coding must be extended with shaping techniques; moreover, to reach capacity
over channels with limited bandwidth, techniques are required that combine coding, shaping
and equalization, as we will see in Chapter 13.
High signal-to-noise ratios. We note from Figure 6.53 that for high values of Γ_I it is
possible to find coding methods that allow reliable transmission of several bits per dimension.
For an uncoded M-PAM system,

    R_I = log₂ M    (6.289)

bits of information are mapped into each transmitted symbol. The average symbol error
probability is given by (6.124),

    P_e = 2 (1 − 1/M) Q( √( 3Γ_I / (M² − 1) ) )    (6.290)

We note that P_e is a function only of M and Γ_I. Moreover, using (6.289) and (6.288)
we obtain the normalized signal-to-noise ratio

    Γ̄_I = Γ_I / (M² − 1)    (6.291)

For large M, P_e can therefore be expressed as

    P_e = 2 (1 − 1/M) Q( √(3 Γ̄_I) ) ≃ 2 Q( √(3 Γ̄_I) )    (6.292)
Figure 6.54. Bit error probability as a function of E_b/N₀ for an uncoded 2-PAM system, and
symbol error probability as a function of Γ̄_I for an uncoded M-PAM system. [From Forney
and Ungerboeck (1998). © 1998 IEEE.]
Low signal-to-noise ratios. For low values of Γ_I the capacity is less than 1 and can be
approximated by binary transmission systems: consequently we refer to coding methods that
employ more binary symbols to obtain the reliable transmission of 1 bit (see Section 6.8).
For low values of Γ_I it is customary to introduce the following ratio (see (6.107)):

    E_b/N₀ = Γ̄_I (2^{2R_I} − 1) / (2R_I)    (6.293)

We note the following particular cases:

• if R_I ≪ 1, then E_b/N₀ ≃ (ln 2) Γ̄_I;
• if R_I = 1/2, then E_b/N₀ = Γ̄_I;
• if R_I = 1, then E_b/N₀ = (3/2) Γ̄_I.

For low Γ_I, if the bandwidth can be extended without limit for a given power, for example,
by using an orthogonal modulation with T → 0 (see Example 6.7.3), then by increasing
the bandwidth, or equivalently the number M of dimensions of the input signals, both Γ_I and
R_I tend to zero. For systems with limited power and unlimited bandwidth, E_b/N₀ is
usually adopted as a figure of merit.
From (6.293) and the Shannon limit Γ̄_I > 1, we obtain the Shannon limit in terms of
E_b/N₀ for a given rate R_I as

    E_b/N₀ > (2^{2R_I} − 1) / (2R_I)    (6.294)

This lower limit decreases monotonically as R_I decreases.
In particular, we examine again the three cases:

• if R_I ≪ 1, then

    E_b/N₀ > ln 2    (−1.59 dB)    (6.295)

in other words, equation (6.295) affirms that even if an infinitely large bandwidth
is used, reliable transmission can be achieved only if E_b/N₀ > −1.59 dB;
• if the bandwidth is limited, from (6.294) we find that the Shannon limit in terms of
E_b/N₀ is higher; for example, if R_I = 1/2 the limit becomes E_b/N₀ > 1 (0 dB);
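The Shannon limit (6.294) on E_b/N₀ is easy to evaluate for any rate; the sketch below reproduces the cases just discussed:

```python
# Shannon limit on Eb/N0 as a function of R_I, in dB (6.294).

import math

def ebn0_limit_dB(R):
    return 10 * math.log10((2 ** (2 * R) - 1) / (2 * R))

assert abs(ebn0_limit_dB(0.5)) < 1e-12                        # R_I = 1/2 -> 0 dB
assert abs(ebn0_limit_dB(1.0) - 10 * math.log10(1.5)) < 1e-12 # R_I = 1 -> ~1.76 dB
# As R_I -> 0 the limit tends to ln 2, i.e. about -1.59 dB:
assert abs(ebn0_limit_dB(1e-9) - 10 * math.log10(math.log(2))) < 1e-6
```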
Coding gain
Definition 6.2
The coding gain of a coded modulation scheme is equal to the reduction in the value of
E_b/N₀, or in the value of Γ or Γ_I (see (11.9)), that is required to obtain a given probability
of error relative to a reference uncoded system. If the modulation rate of the coded system
remains unchanged, we typically refer to Γ or Γ_I.
Let us consider as reference systems a 2-PAM system and an M-PAM system with
M ≫ 1, for small and large values of Γ_I, respectively. Figure 6.54 illustrates the bit
error probability for an uncoded 2-PAM system as a function of both E_b/N₀ and Γ_I. For
P_bit = 10⁻⁶, the reference uncoded 2-PAM system operates at about 12.5 dB from the
ultimate Shannon limit. Thus a coding gain of up to 12.5 dB is possible, in principle, at this
probability of error, if the bandwidth can be sufficiently extended to allow the use of binary
codes with R_I ≪ 1; if, instead, the bandwidth can be extended only by a factor of 2 with
respect to an uncoded system, then a binary code with rate R_I = 1/2 can yield a coding
gain of up to about 10.8 dB.
Figure 6.54 also shows the symbol error probability for an uncoded M-PAM system as a
function of Γ̄_I for large M. For P_e = 10⁻⁶, a reference uncoded M-PAM system operates
at about 9 dB from the Shannon limit: in other words, assuming a limited-bandwidth system,
the Shannon limit can be achieved by a code having a gain of about 9 dB.
6.11. Optimum receivers for signals with random phase 509
Cut-off rate
It is useful to introduce the notion of cut-off rate R₀ associated with a channel, for a given
modulation and class of codes [2].
We sometimes refer to R₀ as a practical upper bound on the transmission bit rate. There-
fore for a given channel we can determine the minimum signal-to-noise ratio (E_b/N₀)₀
below which reliable transmission is not possible, assuming a certain class of coding and
decoding techniques. Typically, for codes with rate R_c = 1/2 (see Chapter 11), (E_b/N₀)₀ is
about 2 dB above the signal-to-noise ratio at which capacity is achieved.
where s_n^{(bb)} is the complex envelope of s_n, relative to the carrier frequency f₀, with support
(0, t₀). If in (6.297) every signal s_n^{(bb)} has a bandwidth smaller than f₀, then the energy of
s_n is given by

    E_n = ∫₀^{t₀} s_n²(t) dt = ∫₀^{t₀} (1/2) |s_n^{(bb)}(t)|² dt    (6.298)

where

    s_n(t, φ) = Re[ s_n^{(bb)}(t) e^{jφ} e^{j2π f₀ t} ]
                                                            (6.300)
              = Re[ s_n^{(bb)*}(t) e^{−jφ} e^{−j2π f₀ t} ]    n = 1, 2, …, M

In other words, at the receiver we assume the carrier is known, except, however, for a
phase φ that we assume to be a uniform r.v. in [−π, π). Receivers that do not rely on
knowledge of the carrier phase are called non-coherent receivers.
We give three examples of signalling schemes that employ non-coherent receivers.
and

    f₁ = f₀ − f_d    f₂ = f₀ + f_d    (6.303)

where f_d is the frequency deviation with respect to the carrier f₀. We recall that if

    f₁ + f₂ = k₁ (1/T)  (k₁ integer)    or else    f₀ ≫ 1/T    (6.304)

and if

    2 f_d T = k  (k integer)    (6.305)

then s₁(t, φ₁) and s₂(t, φ₂) are orthogonal.
The minimum value of f_d is given by

    (f_d)_min = 1/(2T)    (6.306)

which is twice the value we find for the coherent demodulation case (6.203).
and

    s₂(t, φ) = 0    (6.308)

where A = √(4E_s/T), and E_s is the average energy of a pulse.
ML criterion
Given φ = p, that is for known φ, the ML criterion to detect the transmitted signal has
been previously developed starting from (6.26).
The conditional probability density function of the vector r is given by

    p_{r|a₀,φ}(ρ | n, p) = ( 1/√(2π(N₀/2)) )^I  e^{−(1/N₀) ||ρ − s_n||²}
                                                                                    (6.310)
                        = K exp( (2/N₀) ∫₀^{t₀} r(t) s_n(t, p) dt − (1/N₀) ∫₀^{t₀} s_n²(t, p) dt )

we define the following likelihood function, which is equivalent, but not equal, to that
defined in (6.27):

    L_n[p] = exp(−E_n/N₀) exp( (2/N₀) ∫₀^{t₀} r(t) s_n(t, p) dt )    n = 1, …, M    (6.312)

Given φ = p, the maximum likelihood criterion yields the decision rule
Given ' D p, the maximum likelihood criterion yields the decision rule
The dependency on the r.v. φ is removed by taking the expectation of L_n[p] with
respect to φ:16

    L_n = ∫_{−π}^{π} L_n[p] p_φ(p) dp
                                                                                            (6.314)
        = e^{−E_n/N₀} (1/(2π)) ∫_{−π}^{π} exp( (2/N₀) Re[ ∫₀^{t₀} r(t) s_n^{(bb)*}(t) e^{−j(p + 2π f₀ t)} dt ] ) dp

1.
    I₀(x) = (1/(2π)) ∫_{−π}^{π} e^{x cos(p)} dp    ∀x    (6.317)

16 Averaging with respect to the phase φ cannot be considered for PSK and QAM systems, where information is
also carried by the phase of the signal.
From (6.315), the scheme first determines the real and the imaginary parts of L_n starting
from s_n^{(bb)}, and then determines the squared magnitude. Note that the available signal is
s_n^{(bb)}(t) e^{jφ₀}, where φ₀ is a constant, rather than s_n^{(bb)}(t): this, however, does not modify the
magnitude of L_n. As shown in Figure 6.56, the generic branch of the scheme in Figure 6.55,
composed of the I branch and the Q branch, can be implemented by a complex-valued
passband filter (see (6.315)); the bold line denotes a complex-valued signal.
Alternatively, the matched filter can be real-valued if it is followed by a phase splitter:
in this case the receiver is illustrated in Figure 6.57. For the generic branch, the desired
value |L_n| coincides with the absolute value of the output signal of the phase splitter at
instant t₀.
The cascade of the phase splitter and the “modulo” transformation is called the envelope
detector of the signal y_n(t) (see (1.196) and (1.202)).
A simplification arises if the various signals s_n^{(bb)} have a bandwidth B much lower than
f₀. In this case, recalling (1.202), if y_n^{(bb)} is the complex envelope of y_n, the following
relation holds at the matched filter output.

[Figure 6.56: the generic branch filters r(t) with the passband matched filter
s_n^{(bb)*}(t₀ − t) e^{j(2π f₀ t + φ₀)} and takes the squared magnitude |·|² of the output at t₀,
yielding |L_n|².]

Now, if f₀ ≫ B, to determine the amplitude |y_n^{(bb)}(t)|, we can use one of the schemes of
Figure 6.58.
Figure 6.57. Non-coherent ML receiver of the type envelope detector, using passband
matched filters.
(a)
(b)
Figure 6.58. (a) Ideal implementation of an envelope detector, and (b) two simpler approximate
implementations.
[Figure 6.59: receiver for OOK with random phase: r(t) is correlated with
w_T(t) cos(2π f₀ t + φ₀) and passed to an envelope detector producing U; if U > U_Th,
â₀ = 1; if U < U_Th, â₀ = 2.]

and
Equivalently, if we define

    V₁ = √U₁    and    V₂ = √U₂    (6.329)

we have

Now, recalling assumption (6.305), that is s₁(t, φ₁) ⊥ s₂(t, φ₂), we have

    U₂ = [ ∫₀^T r(t) cos(2π f₂ t + φ₀) dt ]² + [ ∫₀^T r(t) sin(2π f₂ t + φ₀) dt ]²
                                                                                    (6.331)
       = w²_{2,c} + w²_{2,s}

where

    w_{2,c} = ∫₀^T w(t) cos(2π f₂ t + φ₀) dt
                                                    (6.332)
    w_{2,s} = ∫₀^T w(t) sin(2π f₂ t + φ₀) dt

If we define

    w_{1,c} = ∫₀^T w(t) cos(2π f₁ t + φ₀) dt
                                                    (6.333)
    w_{1,s} = ∫₀^T w(t) sin(2π f₁ t + φ₀) dt
Figure 6.61. Two receivers for a DSB modulation system with M-ary signalling and random
phase.
we have

    U₁ = [ ∫₀^T r(t) cos(2π f₁ t + φ₀) dt ]² + [ ∫₀^T r(t) sin(2π f₁ t + φ₀) dt ]²
                                                                                            (6.334)
       = [ (AT/2) cos(φ₀ − φ₁) + w_{1,c} ]² + [ (AT/2) sin(φ₀ − φ₁) + w_{1,s} ]²

where from (6.302) we also have

    AT/2 = √(E_s T / 2)    (6.335)

As w(t) is a white Gaussian random process with zero mean, w_{2,c} and w_{2,s} are two
jointly Gaussian r.v.s with

    E[w_{2,c}] = E[w_{2,s}] = 0

    E[w²_{2,c}] = E[w²_{2,s}] = N₀T/4
                                                                                            (6.336)
    E[w_{2,c} w_{2,s}] = (N₀/2) ∫₀^T ∫₀^T δ(t₁ − t₂) cos(2π f₂ t₁ + φ₀) sin(2π f₂ t₂ + φ₀) dt₁ dt₂ = 0

Similar considerations hold for w_{1,c} and w_{1,s}.
Therefore V₂, with statistical power 2(N₀T/4), has a Rayleigh probability density

    p_{V₂}(v₂) = ( v₂ / (N₀T/4) ) e^{−v₂² / (2(N₀T/4))} 1(v₂)    (6.337)

whereas V₁ has a Rice probability density function

    p_{V₁}(v₁) = ( v₁ / (N₀T/4) ) e^{−(v₁² + (AT/2)²) / (2(N₀T/4))} I₀( v₁ (AT/2) / (N₀T/4) ) 1(v₁)    (6.338)

Consequently equation (6.330) assumes the expression17

    P_bit = ∫₀^{+∞} P[V₁ < v₂ | V₂ = v₂] p_{V₂}(v₂) dv₂

          = ∫₀^{+∞} [ ∫₀^{v₂} p_{V₁}(v₁) dv₁ ] p_{V₂}(v₂) dv₂    (6.340)

          = (1/2) e^{−(1/2) E_s/N₀}
It can be shown that this result is not limited to FSK systems and is valid for any pair of
non-coherent orthogonal signals with energy E s .
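The closed form (6.340) can be verified by a short Monte Carlo run. The sketch below models the two envelope-detector outputs directly as magnitudes of complex Gaussians, with the signal present on one branch at a random phase; the parameter values are illustrative and the noise scaling follows the per-component variance E_s N₀/2 of (6.229):

```python
# Binary orthogonal signalling with non-coherent (envelope) detection:
# simulated Pbit vs the closed form Pbit = 0.5*exp(-Es/(2*N0)) of (6.340).

import numpy as np

rng = np.random.default_rng(1)
Es, N0, trials = 4.0, 1.0, 400_000
sigma2 = Es * N0 / 2                 # variance of each real noise component

phase = rng.uniform(-np.pi, np.pi, trials)
noise = rng.normal(0, np.sqrt(sigma2), (trials, 4))
# Branch 1 carries the signal Es*exp(j*phi) plus complex noise; branch 2 noise only.
U1 = np.abs(Es * np.exp(1j * phase) + noise[:, 0] + 1j * noise[:, 1])
U2 = np.abs(noise[:, 2] + 1j * noise[:, 3])

pbit_sim = np.mean(U2 > U1)
pbit_theory = 0.5 * np.exp(-Es / (2 * N0))
assert abs(pbit_sim - pbit_theory) < 5e-3
```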
Figure 6.62. Bit error probability as a function of Γ = E_s/N₀ for BPSK and binary FSK
systems, with coherent (CO) and non-coherent (NC) detection; curves include (d.e.) BPSK
and DBPSK (ρ = −1).
18 For a more accurate evaluation of the probability of error see footnote 14 on page 478.
19 For the computation of the integral in (6.349) we recall the following result:

    ∫₀^{+∞} Q(√(αx)) (1/β) e^{−x/β} dx = (1/2) [ 1 − √( β / (β + 2/α) ) ]    (6.350)
6.12. Binary modulation systems in the presence of flat fading 521
We note that both the above expressions are in practice a lower limit to P_bit, as it is
assumed that an estimate of the phase φ is available, which is very hard to obtain
under fading conditions. In case the uncertainty on the phase is relevant, non-coherent
DPSK and FSK systems are valid alternatives.
3. Orthogonal binary FSK with non-coherent detection

    P_bit = 1 / (2 + Γ_avg)    (6.353)

4. DBPSK

    P_bit = 1 / (2(1 + Γ_avg))    (6.354)
The various expressions of P_bit as a function of Γ_avg are plotted in Figure 6.63 and
compared with the case of transmission over an AWGN channel.
We note that, to achieve a certain P_bit, a substantially larger E_s is required as compared
to the case of transmission over an AWGN channel, for the same N₀.
For a systematic method to determine the performance of systems in the presence of
a channel affected by multipath fading, we refer the reader to [14] and to the references
therein.
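The inverse-SNR behavior under fading, versus the exponential decay over AWGN, can be illustrated directly from (6.353)–(6.354) (a brief sketch; the DBPSK-over-AWGN expression 0.5·e^{−Γ} is the standard result plotted in Figure 6.62):

```python
# Pbit over a flat Rayleigh fading channel, (6.353)-(6.354), versus DBPSK
# over AWGN at the same (average) SNR.

import math

def pbit_fsk_nc_rayleigh(g):    # (6.353), non-coherent orthogonal FSK
    return 1.0 / (2.0 + g)

def pbit_dbpsk_rayleigh(g):     # (6.354)
    return 1.0 / (2.0 * (1.0 + g))

def pbit_dbpsk_awgn(g):         # DBPSK over AWGN: 0.5*exp(-Gamma)
    return 0.5 * math.exp(-g)

g = 100.0                       # Gamma_avg = 20 dB
assert pbit_dbpsk_rayleigh(g) > 1e-3     # still around 5e-3 under fading
assert pbit_dbpsk_awgn(g) < 1e-20        # essentially error-free over AWGN
# A tenfold SNR increase buys only about a tenfold Pbit reduction under fading:
ratio = pbit_dbpsk_rayleigh(10.0) / pbit_dbpsk_rayleigh(100.0)
assert 8 < ratio < 11
```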
Diversity
In the previous section it became apparent that, for transmission over channels with
Rayleigh fading, the probability of error varies inversely with the signal-to-noise ratio,
rather than exponentially as in the AWGN channel case: therefore a large transmission
power is needed to obtain good system performance. To mitigate this problem it is useful
to resort to the concept of diversity, that is, exploiting channels that are independent, or
at least highly uncorrelated, for communication. The basic idea consists in providing the
receiver with several replicas of the signal via independent channels, so that the probability
that the attenuation due to fading is high for all the channels is small. There are various
diversity techniques.
1. Frequency diversity: the same signal is transmitted using several carriers, separated
from each other in frequency by an interval that is larger than the coherence bandwidth
of the channel.
Figure 6.63. Bit error probability as a function of $\Gamma_{avg}$ for BPSK, DBPSK, and binary FSK systems, for a flat Rayleigh fading channel.
2. Time diversity: the same signal is transmitted over different time slots, spaced by an
interval that is larger than the coherence time of the channel.
3. Space diversity: multiple reflections from the ground and surrounding buildings can make the power of the received signal change rapidly; by placing two or more antennas close to each other, we can select the antenna that provides the signal with the highest power.
5. Combinations of the previous techniques: for the many available combining techniques, both linear (equal gain, selection, maximal ratio) and non-linear (square law), we refer to Section 8.18 and to the bibliography [15, 16, 17].
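The benefit of selection combining can be sketched with a short calculation. For i.i.d. Rayleigh-faded branches the per-branch SNR is exponential, so the probability that the best of $n$ branches falls below a threshold is the product of the per-branch probabilities (a standard result; the code below is an illustration, not taken from Section 8.18):

```python
import math
import random

def outage_selection(x, gamma_avg, n_branches):
    """P[max of n i.i.d. exponential branch SNRs < x]: each branch SNR is
    exponential with mean gamma_avg (Rayleigh fading), so the selection
    diversity outage is (1 - exp(-x/gamma_avg))**n."""
    return (1.0 - math.exp(-x / gamma_avg)) ** n_branches

def outage_mc(x, gamma_avg, n_branches, n_trials, rng):
    """Monte Carlo estimate of the same outage probability."""
    count = 0
    for _ in range(n_trials):
        best = max(rng.expovariate(1.0 / gamma_avg) for _ in range(n_branches))
        if best < x:
            count += 1
    return count / n_trials
```

With $\bar\gamma = 10$ and threshold $x = 1$, the single-branch outage is about $9.5\%$, while four branches bring it below $10^{-4}$.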
a) Full duplex, when two users A and B can send information to each other simultane-
ously, not necessarily by using the same transmission channels in the two directions.
b) Half duplex, when two users A and B can send information in only one direction at a time, from A to B or from B to A, alternately.
c) Simplex, when only A can send information to B, that is the link is unidirectional.
Three methods
In the following we give three examples of transmission methods which are used in practice.
a) Frequency-division duplexing (FDD): in this case the two users are assigned different
transmission bands using the same transmission medium, thus allowing full-duplex
transmission. Examples of FDD systems are the GSM, which uses a radio channel
(see Section 17.A.2), and the VDSL, which uses a twisted pair cable (see page 1146).
b) Time-division duplexing (TDD): in this case the two users are assigned different slots
in a time frame (see Section 6.13.2). If the duration of one slot is small with respect
to that of the message, we speak of full-duplex TDD systems. Examples of TDD
systems are the DECT, which uses a radio channel (see Section 17.A.6), and the
ping-pong BR-ISDN, which uses a twisted pair cable.
c) Full-duplex systems over a single band: in this case the two users transmit simulta-
neously in two directions using the same transmission band; examples are the HDSL
(see Section 17.1.1), and in general high-speed transmission systems over twisted-pair
cables for LAN applications (see Section 17.1.2). The two directions of transmission
are separated by a hybrid; the receiver eliminates echo signals by echo cancellation
techniques. We note that full-duplex transmission over a single band is possible also
over radio channels, but in practice alternative methods are still preferred because of
the complexity required by echo cancellation.
20 The access methods discussed in this section are deterministic, as each user knows exactly at which point in
time the channel resources are reserved for transmission; an alternative approach is represented by random
access techniques, e.g., ALOHA, CSMA/CD, collision resolution protocols [18] (see also Chapter 17).
2. Time-division multiple access (TDMA): each user is assigned one of the $N_S$ time sequences (slots), whose elements identify the modulation intervals.
3. Code-division multiple access (CDMA): each user is assigned a modulation scheme that employs one of the $N_0$ orthogonal signals, preserving the orthogonality between the modulated signals of the various users. For example, for the case $N_0 = 8$, each user may be assigned one of the orthogonal signals given in Figure 6.71; for binary modulation, within a modulation interval each user then transmits either the assigned orthogonal signal or its antipodal signal.
We give an example of implementation of the TDMA principle.
Bibliography
[6] R. G. Gallager, Information theory and reliable communication. New York: John
Wiley & Sons, 1968.
[7] T. M. Cover and J. Thomas, Elements of information theory. New York: John Wiley
& Sons, 1991.
[8] C. E. Shannon, “A mathematical theory of communication”, Bell System Technical
Journal, vol. 27, pp. 379–427 (Part I) and 623–656 (Part II), 1948.
[9] G. J. Foschini and M. J. Gans, “On limits of wireless communications in a fad-
ing environment when using multiple antennas”, Wireless Person. Commun., vol. 6,
pp. 311–335, June 1998.
[10] E. Telatar, “Capacity of multi–antenna Gaussian channels”, Europ. Trans. on
Telecomm., vol. 10, pp. 585–595, Nov.–Dec. 1999.
[11] G. Ungerboeck, “Channel coding with multilevel/phase signals”, IEEE Trans. on In-
formation Theory, vol. 28, pp. 55–67, Jan. 1982.
[12] G. D. Forney, Jr. and G. Ungerboeck, “Modulation and coding for linear Gaussian
channels”, IEEE Trans. on Information Theory, vol. 44, pp. 2384–2415, Oct. 1998.
[13] G. D. Forney, Jr., “Trellis shaping”, IEEE Trans. on Information Theory, vol. 38,
pp. 281–300, Mar. 1992.
[14] M. K. Simon and M.-S. Alouini, “Exponential-type bounds on the generalized Marcum
Q-function with application to error probability analysis over fading channels”, IEEE
Trans. on Communications, vol. 48, pp. 359–366, Mar. 2000.
[15] M. Schwartz, W. R. Bennett, and S. Stein, Communication systems and techniques.
New York: McGraw-Hill, 1966.
[16] T. S. Rappaport, Wireless communications: principles and practice. Englewood Cliffs,
NJ: Prentice-Hall, 1996.
$$p_w(b) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(b-m)^2}{2\sigma^2}} \qquad (6.355)$$
In Table 6.10 the values assumed by the complementary Gaussian distribution function are given for values of the argument between 0 and 8. We present below some bounds on the Q function.
$$\text{bound}_1: \quad Q_1(a) = \frac{1}{\sqrt{2\pi}\, a} \left( 1 - \frac{1}{a^2} \right) \exp\left( -\frac{a^2}{2} \right) \qquad (6.362)$$

$$\text{bound}_2: \quad Q_2(a) = \frac{1}{\sqrt{2\pi}\, a}\, \exp\left( -\frac{a^2}{2} \right) \qquad (6.363)$$

$$\text{bound}_3: \quad Q_3(a) = \frac{1}{2}\, \exp\left( -\frac{a^2}{2} \right) \qquad (6.364)$$
The Q function and the above bounds are illustrated in Figure 6.66.
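The ordering of the bounds (6.362)–(6.364) around $Q(a)$ can be verified numerically. A minimal Python sketch, using the standard identity $Q(a) = \frac{1}{2}\,\mathrm{erfc}(a/\sqrt{2})$ (function names are illustrative):

```python
import math

def qfunc(a):
    """Gaussian tail Q(a) = P[N(0,1) > a], via the complementary error function."""
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def q_bound1(a):
    """(6.362): lower bound, useful for a > 1."""
    return math.exp(-a * a / 2.0) / (math.sqrt(2.0 * math.pi) * a) * (1.0 - 1.0 / (a * a))

def q_bound2(a):
    """(6.363): upper bound."""
    return math.exp(-a * a / 2.0) / (math.sqrt(2.0 * math.pi) * a)

def q_bound3(a):
    """(6.364): Chernoff-type upper bound."""
    return 0.5 * math.exp(-a * a / 2.0)
```

For moderate-to-large arguments, $Q_1(a) \le Q(a) \le Q_2(a)$, and the ratio $Q_2(a)/Q(a)$ approaches 1 as $a$ grows.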
6.A. Gaussian distribution function and Marcum function 529
where $I_0$ is the modified Bessel function of the first kind and order zero, defined in (4.216).
From (6.365), two particular cases follow:
$$Q_1(0, b) = e^{-b^2/2} \qquad (6.366)$$
and
$$1 - \frac{1}{2}\left[ e^{-\frac{(a-b)^2}{2}} - e^{-\frac{(a+b)^2}{2}} \right] \le Q_1(a, b) \qquad a > b \ge 0 \qquad (6.371)$$
We observe that in (6.370) the upper bound is very tight, while the lower bound, for a given value of $b$, becomes looser as $a$ increases. In (6.371) the lower bound is very tight. A recursive method for computing the Marcum function is given in [19].
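For quick checks, the Marcum function can also be evaluated directly from its defining integral, $Q_1(a,b) = \int_b^{\infty} x\, e^{-(x^2+a^2)/2} I_0(ax)\, dx$, by simple quadrature. The following Python sketch (a numerical illustration, not the recursive method of [19]) uses a power-series $I_0$ and the trapezoidal rule on a truncated interval:

```python
import math

def bessel_i0(z):
    """Modified Bessel function of the first kind, order zero (power series)."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (z * z / 4.0) / (k * k)
        total += term
    return total

def marcum_q1(a, b, n=4000, upper=None):
    """Q1(a,b) = int_b^inf x exp(-(x^2+a^2)/2) I0(ax) dx, by the
    trapezoidal rule on [b, upper]; a sketch valid for moderate a."""
    if upper is None:
        upper = b + a + 10.0  # the integrand is negligible beyond this point
    h = (upper - b) / n
    def f(x):
        return x * math.exp(-(x * x + a * a) / 2.0) * bessel_i0(a * x)
    s = 0.5 * (f(b) + f(upper))
    for i in range(1, n):
        s += f(b + i * h)
    return s * h
```

The special case (6.366) and the lower bound (6.371) provide convenient sanity checks.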
6.B. Gray coding 531
In this appendix we give a procedure to construct a list of $2^n$ binary words of $n$ bits, where adjacent words differ in only one bit.
The case $n = 1$ is immediate. We have two words with the two possible values
0
(6.372)
1
The list for $n = 2$ is constructed by considering first the list of $(1/2)\,2^2 = 2$ words that are obtained by appending a 0 in front of the words of the list (6.372):
0 0
(6.373)
0 1
The remaining two words are obtained by inverting the order of the words in (6.372) and
appending a 1 in front:
1 1
(6.374)
1 0
0 0
0 1
(6.375)
1 1
1 0
Iterating the procedure for $n = 3$, the first 4 words are obtained by repeating the list (6.375) and appending a 0 in front of the words of the list.
Inverting then the order of the list (6.375) and appending a 1 in front, the final result is
the list of 8 words
0 0 0
0 0 1
0 1 1
0 1 0
(6.376)
1 1 0
1 1 1
1 0 1
1 0 0
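The reflect-and-prefix procedure above translates directly into a few lines of Python (the function name is illustrative):

```python
def gray_list(n):
    """Construct the 2**n Gray-coded words of n bits, as described above:
    prefix 0 to the previous list, then prefix 1 to the reversed previous
    list, starting from the n = 1 list ['0', '1']."""
    words = ['0', '1']
    for _ in range(n - 1):
        words = ['0' + w for w in words] + ['1' + w for w in reversed(words)]
    return words
```

For $n = 3$ this reproduces the list (6.376), and by construction adjacent words always differ in exactly one bit.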
In addition to the widely known PAM, two other baseband pulse modulation techniques
are pulse position modulation (PPM) and pulse duration modulation (PDM).
PPM consists of a set of pulses whose shift, with respect to a given time reference,
depends on the value of the transmitted symbol. We consider the fundamental pulse shape
of Figure 6.67 and an alphabet given by
$$\mathcal{A} = \{0, 1, 2, 3, \ldots, M-1\} \qquad (6.377)$$

$$\mathcal{A} = \{1, 2, \ldots, M\} \qquad (6.379)$$
Signal-to-noise ratio
In both PPM and PDM the information lies in the position of the fronts (edges) of the transmitted pulses: therefore demodulation consists in locating the pulse fronts, which are disturbed by noise.
If the channel bandwidth were infinite, one could receive perfectly rectangular pulses. In practice the received pulse, with amplitude equal to $A$, has a rise time $t_R$ different from
[Figures: the fundamental pulse $g(t)$, nonzero over $(0, T/M)$; PPM and PDM waveforms for symbol values $n = 1, \ldots, 4$.]
zero, as the channel has a finite bandwidth B, and noise disturbs the reception of the pulse,
as illustrated in Figure 6.70. The detection of the front of a pulse is obtained by establishing
the instant $t_i$ at which the received signal, pulse plus noise, crosses a given threshold. The error $\varepsilon(t_i)$ is related to the noise $w(t_i)$, the amplitude $A$, and the rise time $t_R$ of the received pulse:

$$\frac{\varepsilon(t_i)}{t_R} = \frac{w(t_i)}{A} \qquad (6.381)$$
Assuming the noise is stationary with PSD $N_0/2$ over the channel passband of bandwidth $B$, the mean-square error is given by

$$E[\varepsilon^2] = \left( \frac{t_R}{A} \right)^2 E[w^2] = \left( \frac{t_R}{A} \right)^2 N_0 B \qquad (6.382)$$
We consider the following approximate expression that links the rise time to the bandwidth of the pulse:

$$t_R \simeq \frac{1}{2B} \qquad (6.383)$$
Substitution of the above result in (6.382) yields

$$E[\varepsilon^2] \simeq \frac{N_0}{4A^2 B} \qquad (6.384)$$
On the other hand, assuming the average duration of the pulses is $\tau_0$, the signal-to-noise ratio is given by (6.105) with $B_{min} = 1/(2T)$, that is

$$\Gamma = \frac{2 E_{s_{Ch}}}{N_0} \qquad (6.385)$$
6.C. Baseband PPM and PDM 535
where

$$E_{s_{Ch}} = \tau_0 A^2 \qquad (6.386)$$
We illustrate a procedure to obtain orthogonal binary sequences, with values $\{-1, 1\}$, of length $2^m$.
We consider $2^m \times 2^m$ Hadamard matrices $A_m$, with binary elements from the set $\{0, 1\}$. For the first orders, we have

$$A_0 = [\,0\,] \qquad (6.388)$$

$$A_1 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \qquad (6.389)$$
Figure 6.71. Eight orthogonal signals obtained from the Walsh code of length 8.
6.D. Walsh codes 537
$$A_2 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \end{bmatrix} \qquad (6.390)$$

$$A_3 = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix} \qquad (6.391)$$

In general,

$$A_{m+1} = \begin{bmatrix} A_m & A_m \\ A_m & \bar{A}_m \end{bmatrix} \qquad (6.392)$$

where $\bar{A}_m$ denotes the matrix that is obtained by taking the 1's complement of the elements of $A_m$.
A Walsh code of length $2^m$ is obtained by taking the rows (or columns) of the Hadamard matrix $A_m$ and by mapping the binary elements $\{0, 1\}$ into antipodal values $\{1, -1\}$. From the construction of Hadamard matrices, it is easily seen that two distinct words of a Walsh code are orthogonal.
Figure 6.71 shows the 8 signals obtained with the Walsh code of length 8: the signals are obtained by interpolating the Walsh code sequences by a filter having impulse response

$$w_{T_c}(t) = \text{rect}\left( \frac{t - T_c/2}{T_c} \right) \qquad (6.393)$$
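The Sylvester construction of the Hadamard matrices, and the resulting Walsh code, can be sketched in a few lines of Python (function names are illustrative; the $0 \rightarrow +1$, $1 \rightarrow -1$ assignment is one admissible choice of antipodal values):

```python
def hadamard(m):
    """Sylvester construction of the 2**m x 2**m Hadamard matrix A_m with
    elements in {0, 1}: A_{m+1} = [[A_m, A_m], [A_m, complement(A_m)]]."""
    a = [[0]]
    for _ in range(m):
        a = [row + row for row in a] + [row + [1 - x for x in row] for row in a]
    return a

def walsh_code(m):
    """Walsh code of length 2**m: rows of A_m mapped 0 -> +1, 1 -> -1."""
    return [[1 - 2 * x for x in row] for row in hadamard(m)]
```

The orthogonality of distinct code words (inner product zero, against $2^m$ for a word with itself) follows directly from the recursion.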
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 7
Transmission over dispersive channels
In this chapter we will reconsider amplitude modulation (PAM and QAM, see Chapter 6)
for continuous transmission, taking into account the possibility that the transmission chan-
nel may distort the transmitted signal [1, 2]. We will also consider the effects of errors
introduced by the digital transmission on a PCM encoded message (see Chapter 5) [3].
Transmitter
Bit mapper. The bit mapper uses a one-to-one map to match a multilevel symbol to an input bit pattern. Let us consider, for example, symbols $\{a_k\}$ from a quaternary alphabet, $a_k \in \mathcal{A} = \{-3, -1, 1, 3\}$. To select the values of $a_k$ we consider pairs of input bits and map them into quaternary symbols as indicated in Table 7.1. Note that bits are mapped into symbols without introducing redundancy; therefore we speak of uncoded transmission, or in other words the sequence of symbols $\{a_k\}$ is not obtained by applying channel coding.^{1} This situation will be maintained throughout the chapter.
1 We distinguish three types of coding: 1) source or entropy coding; 2) channel coding; and 3) line coding.
Their objectives are respectively: 1) “compress” the digital message by lowering the bit rate without losing
the original signal information (see Chapter 5); 2) increase the “reliability” of the transmission by inserting
redundancy in the transmitted message, so that errors can be detected and/or corrected at the receiver (see
Chapters 11 and 12); and 3) “shape” the spectrum of the transmitted signal (see Appendix 7.A).
540 Chapter 7. Transmission over dispersive channels
Figure 7.2. Signals at various points of a ternary PAM transmission system with alphabet $\mathcal{A} = \{-1, 0, 1\}$.
For uncoded quaternary transmission the symbol period, or modulation interval, $T$ is given by $T = 2T_b$. $1/T$ is the modulation rate or symbol rate of the system and is measured in Baud: it indicates the number of symbols per second that are transmitted. In general, if the values of $a_k$ belong to an alphabet $\mathcal{A}$ with $M$ elements, then

$$\frac{1}{T} = \frac{1}{\log_2 M} \frac{1}{T_b} \quad \text{(Baud)} \qquad (7.3)$$
7.1. Baseband digital transmission (PAM systems) 541
$b_{2k}$  $b_{2k+1}$  |  $a_k$
  0          0        |  −3
  1          0        |  −1
  1          1        |   1
  0          1        |   3
We note that in Section 6.3 we considered an alphabet whose elements were indices, that is $a_k \in \{1, 2, \ldots, M\}$. Now the values of $a_k$ are associated with $\{\alpha_n\}$, $n = 1, \ldots, M$, that is $a_k \in \mathcal{A} = \{-(M-1), \ldots, -1, 1, \ldots, (M-1)\}$.
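The bit map and its inverse can be written down directly. In this Python sketch (illustrative names) the pair-to-symbol assignment is a Gray map, consistent with Table 7.1, so that adjacent amplitudes differ in one bit:

```python
def bit_map(bits):
    """Map pairs of bits to quaternary PAM symbols in {-3, -1, 1, 3}
    (Gray assignment: adjacent amplitudes differ in one bit)."""
    table = {(0, 0): -3, (1, 0): -1, (1, 1): 1, (0, 1): 3}
    return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def inverse_bit_map(symbols):
    """Inverse map, as used by the receiver's inverse bit mapper."""
    inv = {-3: (0, 0), -1: (1, 0), 1: (1, 1), 3: (0, 1)}
    return [b for s in symbols for b in inv[s]]
```

Since the map is one-to-one, mapping and then inverse-mapping recovers the original bit sequence.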
Modulator. For a PAM system, see (6.109), the modulator associates the symbol $a_k$ with the amplitude of a given pulse $h_{Tx}$:

$$a_k \rightarrow a_k h_{Tx}(t - kT) \qquad (7.4)$$

Therefore the modulated signal $s(t)$ that is input to the transmission channel is given by

$$s(t) = \sum_{k=-\infty}^{+\infty} a_k h_{Tx}(t - kT) \qquad (7.5)$$
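On a discrete time grid with several samples per symbol period, (7.5) becomes a sum of shifted, scaled copies of the sampled pulse. A minimal Python sketch (illustrative names; finite symbol sequence assumed):

```python
def pam_modulate(symbols, pulse, sps):
    """Sample s(t) = sum_k a_k hTx(t - kT) on a grid of sps samples per
    symbol period T; `pulse` holds samples of hTx on the same grid, so
    overlapping pulse tails add up."""
    n_out = (len(symbols) - 1) * sps + len(pulse)
    s = [0.0] * n_out
    for k, a in enumerate(symbols):
        for i, p in enumerate(pulse):
            s[k * sps + i] += a * p
    return s
```

With a rectangular pulse lasting exactly one symbol period the copies do not overlap; with a longer pulse the contributions of adjacent symbols add.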
Transmission channel
The transmission channel is assumed to be linear and time invariant, with impulse response $g_{Ch}$. Therefore the desired signal at the output of the transmission channel still has a PAM structure. From the relation $s_{Ch}(t) = (s * g_{Ch})(t)$ we define $q_{Ch}(t) = (h_{Tx} * g_{Ch})(t)$; then we have

$$s_{Ch}(t) = \sum_{k=-\infty}^{+\infty} a_k q_{Ch}(t - kT) \qquad (7.8)$$

The transmission channel also introduces an effective noise $w$; therefore the signal at the input of the receive filter is given by the sum of $s_{Ch}$ and $w$.
Receiver
The receiver consists of three functional blocks:
1. Amplifier-equalizer filter. This block is assumed linear and time invariant, with impulse response $g_{Rc}$. Then the desired signal at its output is given by

$$s_R(t) = \sum_{k=-\infty}^{+\infty} a_k q_R(t - kT) \qquad (7.12)$$

2. Sampler. The filter output is sampled with period $T$ at the instants $t_0 + kT$, yielding the sequence $\{r_k\}$,^{2} where $h_0 = q_R(t_0)$ is the amplitude of the overall impulse response at the sampling instant $t_0$. The parameter $t_0$ is called the timing phase, and its choice is fundamental for system performance.
3. Threshold detector. From the sequence $\{r_k\}$ we detect the transmitted sequence $\{a_k\}$. The simplest structure is the instantaneous non-linear threshold detector:

$$\hat{a}_k = Q[r_k] \qquad (7.15)$$

From the sequence $\{\hat{a}_k\}$, using an inverse bit mapper, the detected binary information message $\{\hat{b}_\ell\}$ is obtained.
2 To simplify the notation, the sample index k, associated with the instant t0 C kT , here appears as a subscript.
Figure 7.3. Characteristic of a threshold detector for ternary symbols with alphabet $\mathcal{A} = \{-1, 0, 1\}$, and amplitude $h_0$ of the overall impulse response.
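The threshold detector of Figure 7.3 is equivalent to picking the alphabet symbol whose scaled value is closest to the received sample. A minimal Python sketch (illustrative name; works for any real alphabet):

```python
def threshold_detect(r, alphabet, h0):
    """Instantaneous threshold detector (7.15): map each sample r_k to the
    alphabet symbol a minimizing |r_k - a*h0|, i.e. the nearest scaled
    symbol, which for equispaced alphabets places thresholds midway."""
    return [min(alphabet, key=lambda a: abs(x - a * h0)) for x in r]
```

For the ternary alphabet with $h_0 = 1$ the decision thresholds fall at $\pm 1/2$, exactly as in Figure 7.3.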
We recall that the receiver structure described above was optimized in Chapter 6 for an
ideal AWGN channel.
where $q(t)$ is the impulse response of a suitable filter. In other words, a PAM signal $s(t)$ may be regarded as a signal generated by an interpolator filter with impulse response $q(t)$, $t \in \mathbb{R}$, as shown in Figure 7.4.
From the spectral analysis (see Example 1.9.9 on page 69) we know that $s$ is a cyclostationary process with average power spectral density given by (see (1.398)):

$$\bar{\mathcal{P}}_s(f) = \left| \frac{1}{T}\, Q(f) \right|^2 \mathcal{P}_a(f) \qquad (7.17)$$

where $\mathcal{P}_a$ is the spectral density of the message and $Q$ is the Fourier transform of $q$.
From Figure 7.4 it is important to verify that by filtering $s(t)$ we obtain a signal that is still PAM, with a pulse given by the convolution of the filter impulse responses. The spectral density of a filtered PAM signal is obtained by multiplying $\mathcal{P}_a(f)$ in (7.17) by the squared magnitude of the filter frequency response. As $\mathcal{P}_a(f)$ is periodic with period $1/T$, the bandwidth $B$ of the transmitted signal is equal to that of $h_{Tx}$.
[Figure 7.4: interpolator filter with impulse response $q$, input symbols $\{a_k\}$ at rate $1/T$, output $s(t)$.]
Transmitter
Bit mapper. The bit mapper uses a map to associate a complex-valued symbol $a_k$ with an input bit pattern. In Figure 7.6 two examples of constellations and corresponding bit patterns are given.
7.2. Passband digital transmission (QAM systems) 545
Here $a_{k,I} = \text{Re}[a_k]$ and $a_{k,Q} = \text{Im}[a_k]$. 4-PSK (or QPSK) symbols are taken from an alphabet with four elements, each identified by two bits. Similarly, each element in a 16-QAM constellation is uniquely identified by four bits.
[Figure 7.7: the spectra $S^{(bb)}(f)$, with bandwidth $B = \frac{1+\rho}{2T}$; $S^{(+)}(f)$, occupying the band $(f_0 - B,\, f_0 + B)$; and $S(f)$.]
Let $f_0$ ($\omega_0 = 2\pi f_0$) and $\varphi_0$ be, respectively, the carrier frequency (radian frequency) and phase. We define

$$s^{(+)}(t) = \tfrac{1}{2}\, s^{(bb)}(t)\, e^{j(\omega_0 t + \varphi_0)} \quad \xrightarrow{\mathcal{F}} \quad S^{(+)}(f) = \tfrac{1}{2}\, S^{(bb)}(f - f_0)\, e^{j\varphi_0} \qquad (7.25)$$

$$s(t) = 2\, \text{Re}\{s^{(+)}(t)\} \quad \xrightarrow{\mathcal{F}} \quad S(f) = S^{(+)}(f) + S^{(+)*}(-f) \qquad (7.26)$$

The transformation in the frequency domain from $s^{(bb)}$ to $s$ is illustrated in Figure 7.7.
We note that the bandwidth $B$ of the transmitted signal is equal to twice the bandwidth of $h_{Tx}$.
3 The result (7.28) needs clarification. We first consider the situation where the condition $r_{s^{(bb)} s^{(bb)*}}(t, t-\tau) = 0$ is satisfied, as for example in the case of QAM with i.i.d. circularly symmetric symbols (see (1.407)). From the equation (similar to (1.304)) that relates $r_s$ to $r_{s^{(bb)}}$ and $r_{s^{(bb)} s^{(bb)*}}$, as the cross-correlations are zero, we find that the process $s$ is cyclostationary in $t$ with period $T$. Taking the average correlation over a period $T$, the results (7.27) and (7.28) follow.
We now consider the situation where $r_{s^{(bb)} s^{(bb)*}}(t, t-\tau) \neq 0$, and in particular the case where $r_{s^{(bb)} s^{(bb)*}}(t, t-\tau)$ is a periodic function of $t$ with period $T$, as for example in the case of PAM-DSB (see Appendix 7.C). In this situation the cross-correlations, in the equation similar to (1.304), do not vanish. If a real value $T_p$ exists such that $T_p$ is an integer multiple of both $T$ and $1/f_0$, then $s$ is cyclostationary in $t$ with period equal to $T_p$. Taking the average correlation over the period $T_p$, and expanding $r_{s^{(bb)} s^{(bb)*}}(t, t-\tau)$ in a Fourier series (in the variable $t$), for $1/T \ll f_0$ the autocorrelation terms approximate the same terms found in the previous case, and the cross-correlation terms become negligible.
(7.29) becomes:

$$s(t) = \cos(\omega_0 t + \varphi_0) \sum_{k=-\infty}^{+\infty} a_{k,I}\, h_{Tx}(t - kT) - \sin(\omega_0 t + \varphi_0) \sum_{k=-\infty}^{+\infty} a_{k,Q}\, h_{Tx}(t - kT) \qquad (7.31)$$
The block-diagram representation of (7.31) is shown in Figure 7.5 (see also Figure 6.38).
The implementation of a QAM transmitter based on (7.31) is discussed in Appendix 7.D.
3. Using the polar notation $a_k = |a_k|\, e^{j\theta_k}$, (7.29) takes the form:

$$s(t) = \text{Re}\left[ e^{j(\omega_0 t + \varphi_0)} \sum_{k=-\infty}^{+\infty} |a_k|\, e^{j\theta_k}\, h_{Tx}(t - kT) \right] = \text{Re}\left[ \sum_{k=-\infty}^{+\infty} |a_k|\, e^{j(\omega_0 t + \varphi_0 + \theta_k)}\, h_{Tx}(t - kT) \right] = \sum_{k=-\infty}^{+\infty} |a_k| \cos(\omega_0 t + \varphi_0 + \theta_k)\, h_{Tx}(t - kT) \qquad (7.32)$$

If $|a_k|$ is constant we obtain the PSK signal (6.127), where the information bits select only the value of the carrier phase.
Coherent receiver
In the absence of noise the general scheme of a coherent receiver is shown in Figure 7.9,
which follows the scheme of Figure 6.40.
The received signal is given by:

$$s_{Ch}(t) = (s * g_{Ch})(t) \quad \xrightarrow{\mathcal{F}} \quad S_{Ch}(f) = S(f)\, G_{Ch}(f) \qquad (7.33)$$
Figure 7.10. Frequency responses of the channel and of signals at various points of the
receiver.
We note that, if $g_{Rc}$ is a non-distorting ideal filter with unit gain, then $s_R(t) = \frac{1}{2}\, s_{Ch}^{(bb)}(t)$.
In the particular case where $g_{Rc}$ is a real-valued filter, the receiver of Figure 7.9 simplifies into that of Figure 7.5. Figure 7.10 illustrates these transformations.
We note that in the above analysis, as the channel may introduce a phase offset, the
receive carrier phase '1 may be different from the transmit carrier phase '0 .
4 We note that the term $e^{j\varphi_0}$ has been moved to the receiver; therefore the signals $s^{(bb)}$ and $r^{(bb)}$ of Figure 7.11 are defined apart from the term $e^{j\varphi_0}$. This is the same as assuming $e^{j(2\pi f_0 t + \varphi_0)}$ as the reference carrier.
frequency, we can study QAM systems by the same method that we have developed for PAM systems.
To simplify the analysis, for the study of a QAM system we will adopt the PAM model of Figure 7.1, assuming that all signals and filters are in general complex-valued. We note that the factor $\frac{1}{2}\, e^{j(\varphi_1 - \varphi_0)}$ appears in Figure 7.11. We will include the factor $e^{j(\varphi_1 - \varphi_0)}/\sqrt{2}$ in the impulse response of the transmission channel $g_{Ch}$, and the factor $1/\sqrt{2}$ in the impulse response $g_{Rc}$. Consequently the additive noise has a spectral density equal to $\frac{1}{2}\, \mathcal{P}_{w^{(bb)}}(f) = 2\, \mathcal{P}_w(f + f_0)$ for $f \ge -f_0$. Therefore the scheme of Figure 7.1 holds also for QAM in the presence of additive noise: the only difference is that in the case of a QAM system the noise is complex-valued with orthogonal in-phase and quadrature components, each having spectral density $\mathcal{P}_w(f + f_0)$ for $f \ge -f_0$.
Hence the scheme of Figure 7.12 is a reference scheme for both PAM and QAM, where

$$G_C(f) = \begin{cases} G_{Ch}(f) & \text{for PAM} \\[2mm] \dfrac{e^{j(\varphi_1 - \varphi_0)}}{\sqrt{2}}\, G_{Ch}(f + f_0)\, \mathrm{1}(f + f_0) & \text{for QAM} \end{cases} \qquad (7.38)$$
We note that for QAM we have
Figure 7.12. Baseband equivalent model of PAM and QAM transmission systems.
With reference to the scheme of Figure 7.9, the relation between the impulse responses of the receive filters is given by

$$\bar{g}_{Rc}(t) = \frac{1}{\sqrt{2}}\, g_{Rc}(t) \qquad (7.40)$$

In the following, to simplify the notation, this filter will be indicated in many passband schemes simply as $g_{Rc}$.
We summarize the definitions of the various signals in QAM systems.
1. Sequence of input symbols, $\{a_k\}$: a sequence of symbols with values from a complex-valued alphabet $\mathcal{A}$. In PAM systems, the symbols of the sequence $\{a_k\}$ assume real values.
2. Modulated signal,^{5}

$$s(t) = \sum_{k=-\infty}^{+\infty} a_k h_{Tx}(t - kT) \qquad (7.41)$$

with

$$q_R(t) = (q_C * g_{Rc})(t) \quad \text{and} \quad w_R(t) = (w_C * g_{Rc})(t) \qquad (7.48)$$

In PAM systems, $g_{Rc}$ is a real-valued filter.
7. Signal at the decision point at instant $t_0 + kT$,

$$y_k = r_R(t_0 + kT) \qquad (7.49)$$
Signal-to-noise ratio
The performance of a system is expressed as a function of the signal-to-noise ratio $\Gamma$ defined in (6.105), which we recall here.
In general, with reference to the schemes of Figures 7.1 and 7.5, for a channel output signal $s_{Ch}$ having minimum bandwidth $B_{min}$, and assuming the noise $w$ is white with $\mathcal{P}_w(f) = N_0/2$, we have

$$\Gamma = \frac{E[s_{Ch}^2(t)]}{(N_0/2)\, 2 B_{min}} = \frac{M_{s_{Ch}}}{N_0 B_{min}} = \frac{E_{s_{Ch}}}{N_0 (B_{min} T)} \qquad (7.51)$$
We now express $E_{s_{Ch}}$ in the cases of PAM and QAM systems.
PAM systems. For an i.i.d. input symbol sequence, using (1.399) we have

$$E_{s_{Ch}} = M_a E_{q_{Ch}} \qquad (7.52)$$

and, for $B_{min} = 1/(2T)$, we obtain

$$\Gamma = \frac{M_a E_{q_{Ch}}}{N_0/2} \qquad (7.53)$$

where $M_a$ is the statistical power of the data and $E_{q_{Ch}}$ is the energy of the pulse $q_{Ch} = h_{Tx} * g_{Ch}$. Because for PAM, observing (7.38), we have $q_{Ch} = q_C$, (7.53) can also be expressed as

$$\Gamma = \frac{M_a E_{q_C}}{N_0/2} \qquad (7.54)$$
7.3. Baseband equivalent model of a QAM system 553
Transmitter
The choice of the transmit pulse is quite important because it determines the bandwidth of the system (see (7.17) and (7.28)). Two choices are shown in Figure 7.13, where

1. $h_{Tx}(t) = w_T(t) = \text{rect}\left( \dfrac{t - T/2}{T} \right)$, with wide spectrum;
Transmission channel
The transmission channel is modelled as a time invariant linear system. Therefore it is
represented by a filter having impulse response gCh . As described in Chapter 4, the majority
of channels are characterized by frequency responses having a null at DC. Therefore the
shape of the frequency response GCh . f / is as represented in Figure 7.14, where the passband
goes from f 1 to f 2 . For transmission over cables, f 1 may be of the order of a few hundred
Hertz, whereas for radio links, $f_1$ may be in the range of MHz or GHz. Consequently, PAM (possibly using a line code) as well as QAM transmission systems may be considered for transmission over cables; for transmission over radio, instead, a PAM signal needs to be translated in frequency (PAM-DSB or PAM-SSB), or a QAM system may be used, assuming as carrier frequency $f_0$ the center frequency of the passband $(f_1, f_2)$. In any case, the channel is bandlimited with finite bandwidth $f_2 - f_1$.
7 The term $M_a E_{q_C}/2$ represents the energy of both $\text{Re}[s_C(t)]$ and $\text{Im}[s_C(t)]$, assuming that $s_C(t)$ is circularly symmetric (see (1.407)).
With reference to the general model of Figure 7.12, we adopt the polar notation for $G_C$:

$$G_C(f) = |G_C(f)|\, e^{j \arg G_C(f)} \qquad (7.58)$$

Let $B$ be the bandwidth of $s(t)$. According to (1.144), a channel presents ideal characteristics, known as the Heaviside conditions for the absence of distortion, if the following two properties are satisfied:
1. the magnitude response is constant for $|f| < B$,

$$|G_C(f)| = G_0 \quad \text{for } |f| < B \qquad (7.59)$$

Under these conditions, $s$ is reproduced at the output of the channel without distortion, that is:

$$s_C(t) = G_0\, s(t - t_0) \qquad (7.61)$$
In practice, channels introduce both “amplitude distortion” and “phase distortion”. An
example of frequency response of a radio channel is given in Figure 4.32: the overall effect
is that the signal sC .t/ may be very different from s.t/.
In short, for channels encountered in practice conditions (7.59) and (7.60) are too
stringent; for PAM and QAM transmission systems we will refer instead to the Nyquist
criterion (7.79).
Receiver
We return to the receiver structure of Figure 7.12, consisting of a filter $g_{Rc}$ followed by a sampler with sampling rate $1/T$, and a data detector.
In general, if the frequency response of the receive filter $G_{Rc}(f)$ contains a factor $C(e^{j2\pi f T})$, periodic with period $1/T$, such that the factorization $G_{Rc}(f) = G_M(f)\, C(e^{j2\pi f T})$ holds, where $G_M(f)$ is a generic function, then the filter-sampler block before the data detector of Figure 7.12 can be represented as in Figure 7.15, where the sampler is followed by a discrete-time filter. It is easy to prove that in the two systems the relation between $r_C(t)$ and $y_k$ is the same.
Ideally, in the system of Figure 7.15 yk should be equal to ak . In practice, as illustrated
in Figure 7.16, linear distortion and additive noise, the only disturbances considered here,
may determine a significant deviation of yk from the desired symbol ak .
Figure 7.16. Values of $y_k = y_{k,I} + j y_{k,Q}$, in the presence of noise and linear distortion.
The last element in the receiver is the data detector. One of the simplest data detectors is the threshold detector, which associates with each value of $y_k$ a possible value of $a_k$ in the constellation. Using the rule of deciding for the symbol closest to the sample $y_k$, the decision regions for a QPSK system and a 16-QAM system are illustrated in Figure 7.17.
and

$$w_{R,k} = w_R(t_0 + kT) \qquad (7.64)$$

Then, from (7.49), at the decision point the generic sample is expressed as

$$y_k = s_{R,k} + w_{R,k} \qquad (7.65)$$
Figure 7.17. Decision regions for a QPSK system and a 16-QAM system.
and defining $h_i = q_R(t_0 + iT)$, it follows that

$$s_{R,k} = \sum_{i=-\infty}^{+\infty} a_i h_{k-i} = a_k h_0 + i_k \qquad (7.68)$$

where

$$i_k = \sum_{\substack{i=-\infty \\ i \neq k}}^{+\infty} a_i h_{k-i} = \cdots + h_{-1} a_{k+1} + h_1 a_{k-1} + h_2 a_{k-2} + \cdots \qquad (7.69)$$

represents the intersymbol interference (ISI). The coefficients $\{h_i\}_{i \neq 0}$ are called interferers. Moreover (7.65) becomes

$$y_k = a_k h_0 + i_k + w_{R,k} \qquad (7.70)$$

We observe that, even in the absence of noise, the detection of $a_k$ from $y_k$ by a threshold detector takes place in the presence of the term $i_k$, which behaves as a disturbance with respect to the desired term $a_k h_0$.
For the analysis, it is often convenient to approximate $i_k$ as noise with a Gaussian distribution: the more numerous and similar in amplitude the interferers are, the more valid this approximation is. In the case of i.i.d. symbols, the first two moments of $i_k$ are easily determined.
$$\text{Mean value of } i_k: \quad m_i = m_a \sum_{\substack{i=-\infty \\ i \neq 0}}^{+\infty} h_i \qquad (7.71)$$

$$\text{Variance of } i_k: \quad \sigma_i^2 = \sigma_a^2 \sum_{\substack{i=-\infty \\ i \neq 0}}^{+\infty} |h_i|^2 \qquad (7.72)$$
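For a finite-length overall impulse response, (7.71)–(7.72) reduce to sums over the interferers. A small Python sketch (illustrative names; the caller supplies the index of $h_0$ within the list):

```python
def isi_moments(h, i0, m_a, var_a):
    """Mean and variance of the ISI term i_k from (7.71)-(7.72), for i.i.d.
    symbols with mean m_a and variance var_a; h is the list of overall
    impulse-response samples h_i and i0 is the index of h_0 in that list."""
    interferers = [x for i, x in enumerate(h) if i != i0]
    mean = m_a * sum(interferers)
    var = var_a * sum(abs(x) ** 2 for x in interferers)
    return mean, var
```

For example, for quaternary symbols equally likely in $\{-3,-1,1,3\}$ ($m_a = 0$, $\sigma_a^2 = 5$) and a response with small pre- and post-cursors, the ISI variance is $\sigma_a^2$ times the interferers' energy.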
From (7.65), with fs R;k g given by (7.68), we derive the discrete-time equivalent scheme,
with period T (see Figure 7.18), that relates the signal at the decision point to the data
transmitted over a discrete-time channel with impulse response given by the sequence fh i g,
called overall discrete-time equivalent impulse response of the system.
Concerning the additive noise $\{w_{R,k}\}$,^{8} since

$$\mathcal{P}_{w_R}(f) = \mathcal{P}_{w_C}(f)\, |G_{Rc}(f)|^2 \qquad (7.73)$$

the PSD of $\{w_{R,k}\}$ is given by

$$\mathcal{P}_{w_{R,k}}(f) = \frac{1}{T} \sum_{\ell=-\infty}^{+\infty} \mathcal{P}_{w_R}\left( f - \frac{\ell}{T} \right) \qquad (7.74)$$

In any case, the variance of $w_{R,k}$ is equal to that of $w_R$ and is given by

$$\sigma_{w_{R,k}}^2 = \sigma_{w_R}^2 = \int_{-\infty}^{+\infty} \mathcal{P}_{w_C}(f)\, |G_{Rc}(f)|^2\, df \qquad (7.75)$$
In particular, the variance per dimension of the noise is given by

$$\text{PAM:} \quad \sigma_I^2 = E[w_{R,k}^2] = \sigma_{w_R}^2 \qquad (7.76)$$

$$\text{QAM:} \quad \sigma_I^2 = E[(\text{Re}[w_{R,k}])^2] = E[(\text{Im}[w_{R,k}])^2] = \tfrac{1}{2}\, \sigma_{w_R}^2 \qquad (7.77)$$

In the case of PAM (QAM) transmission over a channel with white noise, where $\mathcal{P}_{w_C}(f) = N_0/2$ ($N_0$), (7.75) yields a variance per dimension equal to

$$\sigma_I^2 = \frac{N_0}{2}\, E_{g_{Rc}} \qquad (7.78)$$

where $E_{g_{Rc}}$ is the energy of the receive filter. We observe that (7.78) holds for PAM as well as for QAM.
Nyquist pulses
The problem we wish to address consists in finding the conditions on the various filters of the system so that, in the absence of noise, $y_k$ is a replica of $a_k$. The solution is the Nyquist criterion for the absence of distortion in digital transmission.
From (7.68), to obtain $y_k = a_k$ it must be

$$h_0 = 1 \quad \text{and} \quad h_i = 0 \ \ \text{for } i \neq 0 \qquad (7.79)$$

so that the ISI vanishes. A pulse $h(t)$ that satisfies the conditions (7.79) is said to be a Nyquist pulse with modulation interval $T$.
The conditions (7.79) have their equivalent in the frequency domain. They can be derived using the Fourier transform of the sequence $\{h_i\}$ (see (1.90)),

$$\sum_{i=-\infty}^{+\infty} h_i\, e^{-j2\pi f i T} = \frac{1}{T} \sum_{\ell=-\infty}^{+\infty} H\left( f - \frac{\ell}{T} \right) \qquad (7.80)$$

where $H(f)$ is the Fourier transform of $h(t)$. From the conditions (7.79) the left-hand side of (7.80) is equal to 1; hence the condition for the absence of ISI is formulated in the frequency domain, for the generic pulse $h$, as:

$$\sum_{\ell=-\infty}^{+\infty} H\left( f - \frac{\ell}{T} \right) = T \qquad (7.81)$$
From (7.81) we deduce an important fact: the Nyquist pulse with minimum bandwidth is given by

$$h(t) = h_0\, \text{sinc}\left( \frac{t}{T} \right) \quad \xrightarrow{\mathcal{F}} \quad H(f) = T h_0\, \text{rect}\left( \frac{f}{1/T} \right) \qquad (7.82)$$
Definition 7.1
The frequency 1=.2T /, which coincides with half of the modulation frequency, is called
Nyquist frequency.
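Both forms of the criterion can be checked numerically. The Python sketch below (illustrative names; the raised-cosine response anticipates (7.84) below) verifies that the folded spectrum (7.81) of a Nyquist $H(f)$ is constant, and that the minimum-bandwidth pulse (7.82) has the required zero crossings:

```python
import math

def sinc(x):
    """sinc(x) = sin(pi x)/(pi x), the minimum-bandwidth Nyquist pulse shape."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def h_raised_cosine_freq(f, T, rho):
    """Raised-cosine H(f) (cf. (7.84)), a standard Nyquist frequency response."""
    af = abs(f)
    if af <= (1 - rho) / (2 * T):
        return T
    if af <= (1 + rho) / (2 * T):
        return T * math.cos(math.pi * T / (2 * rho) * (af - (1 - rho) / (2 * T))) ** 2
    return 0.0

def folded_spectrum(H, f, T, n_alias=50):
    """Left-hand side of the Nyquist criterion (7.81): sum_l H(f - l/T)."""
    return sum(H(f - l / T) for l in range(-n_alias, n_alias + 1))
```

The folded spectrum evaluates to $T$ at every frequency, and $\mathrm{sinc}(t/T)$ vanishes at all nonzero multiples of $T$, in agreement with (7.79).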
Figure 7.19. Time and frequency plots of raised cosine and square root raised cosine pulses for three values of the roll-off factor $\rho$.
We define

$$\text{rcos}(x, \rho) = \begin{cases} 1 & 0 \le |x| \le \dfrac{1-\rho}{2} \\[2mm] \cos^2\left( \dfrac{\pi}{2\rho} \left( |x| - \dfrac{1-\rho}{2} \right) \right) & \dfrac{1-\rho}{2} < |x| \le \dfrac{1+\rho}{2} \\[2mm] 0 & |x| > \dfrac{1+\rho}{2} \end{cases} \qquad (7.83)$$

then

$$H(f) = T\, \text{rcos}\left( \frac{f}{1/T},\, \rho \right) \qquad (7.84)$$
In this case

$$h(0) = \int_{-\infty}^{+\infty} T\, \sqrt{ \text{rcos}\left( \frac{f}{1/T},\, \rho \right) }\, df = 1 - \rho \left( 1 - \frac{4}{\pi} \right) \qquad (7.90)$$

We note that $H(f)$ in (7.88) is not the frequency response of a Nyquist pulse. Plots of $h(t)$ and $H(f)$, given respectively by (7.89) and (7.88), for various values of $\rho$ are shown in Figure 7.19b.
The parameter $\rho$, called excess bandwidth parameter or roll-off factor, is in the range between 0 and 1. We note that $\rho$ determines how fast the pulse decays in time.
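The value of $h(0)$ in (7.90) can be verified by direct numerical integration of the square root raised cosine spectrum (a Python sketch with illustrative names, taking $T = 1$):

```python
import math

def rcos(x, rho):
    """rcos(x, rho) from (7.83)."""
    ax = abs(x)
    if ax <= (1 - rho) / 2:
        return 1.0
    if ax <= (1 + rho) / 2:
        return math.cos(math.pi / (2 * rho) * (ax - (1 - rho) / 2)) ** 2
    return 0.0

def srrc_h0(rho, n=20000):
    """h(0) of the square root raised cosine pulse with T = 1: evaluate
    h(0) = int sqrt(rcos(f, rho)) df over the support of the spectrum,
    by the composite trapezoidal rule (cf. (7.90))."""
    lim = (1 + rho) / 2
    h = 2 * lim / n
    total = 0.0
    for i in range(n + 1):
        f = -lim + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.sqrt(rcos(f, rho))
    return total * h
```

For every roll-off factor the numerical value agrees with $1 - \rho(1 - 4/\pi)$ from (7.90).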
Observation 7.1
From the Nyquist conditions we deduce that:
1. a data sequence can be transmitted with modulation rate $1/T$ without errors if $H(f)$ satisfies the Nyquist criterion and there is no noise;
2. the channel, with frequency response $G_C$, must have a bandwidth of at least $1/(2T)$, otherwise intersymbol interference cannot be avoided.
Eye diagram
From (7.68) we observe that if the samples {h_i}, for i ≠ 0, are not sufficiently small with
respect to h_0, the ISI may become the dominant disturbance with respect to noise and impair
the performance of the system. On the other hand, from (7.66) and (7.67) the discrete-time
impulse response {h_i} depends on the choice of the timing phase t_0 (see Chapter 14) and
on the pulse shape q_R.
In the absence of noise, at the decision point the sample y_0, as a function of t_0, is given by

y_0 = y(t_0) = ∑_{i=−∞}^{+∞} a_i q_R(t_0 − iT) = a_0 q_R(t_0) + i_0(t_0)   (7.92)

where

i_0(t_0) = ∑_{i=−∞, i≠0}^{+∞} a_i q_R(t_0 − iT)
         = ··· + a_{−1} q_R(t_0 + T) + a_1 q_R(t_0 − T) + a_2 q_R(t_0 − 2T) + ···   (7.93)

is the ISI.
We now illustrate, through an example, a graphic method to represent the effect of the
choice of t_0 for a given pulse q_R. We consider a PAM transmission system where y(t_0) is
real; for a QAM system, both Re[y(t_0)] and Im[y(t_0)] need to be represented. We consider
quaternary transmission with

a_k = α_n ∈ A = {−3, −1, 1, 3}   (7.94)

and pulse q_R as shown in Figure 7.20.
In the absence of ISI, i_0(t_0) = 0 and y_0 = a_0 q_R(t_0). In relation to each possible value
α_n of a_0, the pattern of y_0 as a function of t_0 is shown in Figure 7.21: it is seen that
the possible values of y_0, for α_n ∈ A, are further apart, therefore they offer a greater
Figure 7.20. Pulse shape for the computation of the eye diagram.
Figure 7.21. Plots of α_n q_R(t_0) as a function of t_0, for α_n ∈ {−3, −1, 1, 3}.
margin against noise in relation to the peak of q_R, which in this example occurs at instant
t_0 = 1.5T. In fact, for a given t_0 and for a given message …, a_{−1}, a_1, a_2, …, it may
happen that i_0(t_0) ≠ 0, and this value is added to the desired sample a_0 q_R(t_0).
The range of variations of y_0(t_0) around the desired sample α_n q_R(t_0) is determined by
the values

i_max(t_0; α_n) = max_{{a_k}, a_0 = α_n} i_0(t_0)   (7.95)

If the symbols {a_k} are statistically independent with balanced values, that is both α_n
and −α_n belong to A, defining

α_max = max_n α_n   (7.98)

and

i_abs(t_0) = α_max ∑_{i=−∞, i≠0}^{+∞} |q_R(t_0 − iT)|   (7.99)

we have that

i_max(t_0) = i_abs(t_0)   (7.100)
i_min(t_0) = −i_abs(t_0)   (7.101)
We note that both functions do not depend on a_0 = α_n.
For the considered pulse, the eye diagram is given in Figure 7.22. We observe that as
a result of the presence of ISI, the values of y_0 may be very close to each other, and
therefore reduce considerably the margin against noise. We also note that, in general, the
timing phase that offers the largest margin against noise is not necessarily found in relation
to the peak of q_R. In this example, however, the choice t_0 = 1.5T guarantees the largest
margin against noise.
In the general case, where there exists correlation between the symbols of the sequence
{a_k}, it is easy to show that i_max(t_0; α_n) ≤ i_abs(t_0) and i_min(t_0; α_n) ≥ −i_abs(t_0).
Consequently the eye may be wider as compared to the case of i.i.d. symbols.
For quaternary transmission, we show in Figure 7.23 the eye diagram obtained with a
raised cosine pulse q_R, for two values of the roll-off factor.
In general, the M − 1 "pupils" of the eye diagram have a shape as illustrated in
Figure 7.24, where two parameters are identified: the height a and the width b. The height
a is an indicator of the noise immunity of the system. The width b indicates the immunity
with respect to deviations from the optimum timing phase. For example, a raised cosine pulse
with ρ = 1 offers greater immunity against errors in the choice of t_0 as compared to the
case ρ = 0.125. The price we pay is a larger bandwidth of the transmission channel.
Figure 7.22. Eye diagram for quaternary transmission and pulse q_R of Figure 7.20.
We now illustrate an alternative method to obtain the eye diagram. A long random
sequence of symbols {a_k} is transmitted over the channel, and the portions of the curve
y(t) = s_R(t) relative to the various intervals [t_1, t_1 + T), [t_1 + T, t_1 + 2T),
[t_1 + 2T, t_1 + 3T), …, are mapped onto the same interval, for example onto [t_1, t_1 + T).
Typically, we select t_1 so that the center of the eye falls in the center of the interval
[t_1, t_1 + T). Then the contours of the obtained eye diagram correspond to the different
profiles (7.97). If the contours of the eye do not appear, it means that for all values of t_0
the worst-case ISI is larger than the desired component and the eye is shut. We note that,
if the pulse q_R(t), t ∈ ℝ, has a duration equal to t_h, and we define N_h = ⌈t_h/T⌉, we
must omit plotting the values of y(t) for the first and last N_h − 1 modulation intervals, as
they would be affected by the transient behavior of the system. Moreover, for transmission
with i.i.d. symbols, at every instant t ∈ ℝ the number of symbols {a_k} that contribute to
y(t) is at most equal to N_h. To plot the eye diagram we need in principle to generate all
the M-ary symbol sequences of length N_h: in this manner we will reproduce the values of
y(t) in correspondence of the various profiles.
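The folding procedure just described can be sketched as follows (Python; the raised-cosine pulse, the truncation span, and all parameter values are our own illustrative choices):

```python
import math, random

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def rc(t, T, rho):
    # closed-form raised-cosine pulse (assumed), with removable singularity handled
    x = t / T
    den = 1.0 - (2.0 * rho * x) ** 2
    if abs(den) < 1e-10:
        return (math.pi / 4.0) * sinc(1.0 / (2.0 * rho))
    return sinc(x) * math.cos(math.pi * rho * x) / den

def eye_traces(alphabet, T=1.0, rho=0.5, n_sym=200, spt=16, span=10, seed=7):
    """Fold y(t) = sum_i a_i q_R(t - iT) onto one modulation interval,
    discarding the first and last `span` intervals (transients)."""
    rng = random.Random(seed)
    a = [rng.choice(alphabet) for _ in range(n_sym)]
    traces = []
    for k in range(span, n_sym - span):
        seg = [sum(a[i] * rc((k + m / spt) * T - i * T, T, rho)
                   for i in range(k - span, k + span + 1))   # pulse tail truncated
               for m in range(spt)]
        traces.append(seg)
    return traces

traces = eye_traces([-3, -1, 1, 3])
# with a Nyquist pulse, each trace passes exactly through one of the four
# levels at the symbol instant (sample m = 0 of each segment)
```

Plotting all the segments of `traces` over one interval produces the eye diagram; with a Nyquist q_R the traces coincide with the symbol values at the sampling instant and spread out in between.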
Figure 7.23. Eye diagram for quaternary transmission and raised cosine pulse q_R with roll-off
factor: (a) ρ = 0.125 and (b) ρ = 1.
Figure 7.24. Pupil of the eye diagram, with height a and width b.
For a memoryless decision rule on y_k, i.e. regarding y_k as an isolated sample, and still
considering the ML detection criterion described in Section 6.1, we have the following
correspondences:

r_{R,k} = y_k = [y_{k,I}, y_{k,Q}]  →  r = [r_1, r_2]^T   (7.103)
a_k = [a_{k,I}, a_{k,Q}]  →  s = [s_1, s_2]^T   (7.104)
w_{R,k} = [Re[w_{R,k}], Im[w_{R,k}]]  →  w = [w_1, w_2]^T   (7.105)
If w_{R,k} has a circularly symmetric Gaussian probability density function, as is the case
if equation (7.43) holds, and the values assumed by a_k are equally likely, then, given the
observation y_k, the detection criterion leads to choosing the value of α_n ∈ A that is closest
to y_k.⁹ Moreover, the error probability depends on the distance d_m between adjacent signals,
in this specific case between adjacent values α_n ∈ A, and on the variance per dimension
of the noise w_{R,k}, σ_I².
Hence, defining

γ = ( d_m/(2σ_I) )²   (7.106)

and using (6.122) and (6.196), we have

M-PAM:  Pe = 2 (1 − 1/M) Q(√γ)   (7.107)

M-QAM:  Pe ≈ 4 (1 − 1/√M) Q(√γ)   (7.108)

We note that, for the purpose of computing Pe, only the variance of the noise w_{R,k} is
needed and not its PSD. We also note that (7.106) coincides with (6.57).
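Expressions (7.107) and (7.108) are straightforward to evaluate numerically. A small sketch (Python; the function names are ours, and Q(x) is expressed through the complementary error function):

```python
import math

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_mpam(M, gamma):
    # (7.107): Pe = 2 (1 - 1/M) Q(sqrt(gamma))
    return 2.0 * (1.0 - 1.0 / M) * Q(math.sqrt(gamma))

def pe_mqam(M, gamma):
    # (7.108): Pe ~= 4 (1 - 1/sqrt(M)) Q(sqrt(gamma))
    return 4.0 * (1.0 - 1.0 / math.sqrt(M)) * Q(math.sqrt(gamma))
```

For M = 4, (7.108) gives exactly twice the 2-PAM value of (7.107), consistent with viewing a 4-QAM constellation as two independent binary constellations on the I and Q branches.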
With reference to Table 7.1 and to Figure 7.6, we consider d_m = 2h_0 = 2. Now, for a
channel with white noise, σ_I² is given by (7.78), hence from (7.106) it follows that

γ = 2/(N_0 E_{g_Rc})   (7.109)

Apparently, the above equation could lead to choosing a filter g_Rc with very low energy,
so that γ ≫ 1. However, here g_Rc is not arbitrary, but it must be chosen such that the
condition (7.79) for the absence of ISI is satisfied. We will see in Chapter 8 a criterion to
design the filter g_Rc.
The general case of computation of Pe in the presence of ISI and non-Gaussian noise is
given in Appendix 7.B.
⁹ We observe that this memoryless decision criterion is optimum only if the noise samples {w_{R,k}} are statistically
independent.
568 Chapter 7. Transmission over dispersive channels
Assuming that the pulse q_C that determines the signal at the channel output is given,
the solution (see Section 1.10 on page 73) is provided by the receive filter g_Rc matched
to q_C: hence the name matched filter (MF). In particular, with reference to the scheme of
Figure 7.12 and for white noise w_C, we have

G_Rc(f) = K Q_C*(f) e^{−j2πf t_0}   (7.110)

we obtain

K = 1/E_{q_C}   (7.112)
Using passband filters, the QAM scheme of Figure 7.25 is modified into the scheme of
Figure 7.26, where the impulse responses of the transmit filters are given by

h^{(pb)}_{Tx,I}(t) = h_Tx(t) cos(2πf_0 t)   (7.115)
h^{(pb)}_{Tx,Q}(t) = h_Tx(t) sin(2πf_0 t)   (7.116)

= ∑_{k=−∞}^{+∞} [ ã_{k,I} h^{(pb)}_{Tx,I}(t − kT) − ã_{k,Q} h^{(pb)}_{Tx,Q}(t − kT) ]   (7.119)
where ã_{k,I} = Re[a_k e^{j2πf_0 kT}] and ã_{k,Q} = Im[a_k e^{j2πf_0 kT}]. Therefore in the scheme of
Figure 7.26, the input to the modulation filters at instant k is given by ã_k = a_k e^{j2πf_0 kT},
which is equal to the symbol a_k with an additional deterministic phase that must be removed
at the receiver.
CAP modulation is obtained by leaving out the additional phase, as shown in Figure 7.27.
If we define

h̃_Tx(t) = h_Tx(t) e^{j2πf_0 t} = h^{(pb)}_{Tx,I}(t) + j h^{(pb)}_{Tx,Q}(t)   (7.120)
Because the pulses h^{(pb)}_{Tx,I}(t) and h^{(pb)}_{Tx,Q}(t), filtered by the transmission channel, are
still related by the Hilbert transform, the receiver uses a passband matched filter of the
phase-splitter type, implemented by two real-valued filters with impulse responses given by
(7.117) and (7.118) (see Figure 7.27). In general, if the transfer function of the transmission
medium is unknown a priori, the receive filters are adaptive (see Chapter 8).
We note that CAP modulation is equivalent to QAM, with the difference that in a QAM
system the input symbols {a_k} are substituted by the rotated symbols {ã_k}. If f_0 is an
integer multiple of 1/T, then there is no difference between CAP and QAM. QAM is
usually selected if f_0 ≫ 1/(2T). In the case where f_0 is not much larger than 1/(2T),
as usually occurs in data transmission systems over metallic cables, CAP modulation may
be preferred to QAM because it does not need carrier recovery. On the other hand, for
transmission channels that introduce a frequency offset, an acquisition mechanism for the
carrier must be introduced and typically QAM is adopted.
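The symbol rotation that distinguishes the two schemes can be sketched directly (Python; the symbol values are illustrative):

```python
import cmath

def rotate(a, f0, T):
    """QAM input mapping: a~_k = a_k * exp(j 2 pi f0 k T) (deterministic phase)."""
    return [ak * cmath.exp(2j * cmath.pi * f0 * k * T) for k, ak in enumerate(a)]

a = [1 + 1j, -1 + 1j, -3 - 1j, 3 - 3j]
# when f0 is an integer multiple of 1/T, exp(j 2 pi f0 k T) = 1 for every k:
# the rotation is the identity and CAP and QAM coincide
rotated = rotate(a, f0=3.0, T=1.0)
```

For a non-integer f_0·T the rotation is a genuine per-symbol phase shift, which is the deterministic phase a QAM receiver must remove and a CAP receiver avoids altogether.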
7.5. Regenerative PCM repeaters 571
Correspondingly we give in Figure 7.29 the statistical model associated with a memory-
less binary symmetric channel. In the following it is useful to evaluate the error probability
of words c composed of b bits:

P_word = 1 − (1 − P_bit)^b

assuming errors are i.i.d. On the other hand, if P_bit ≪ 1 it follows that (1 − P_bit)^b ≈
1 − bP_bit and

P_word ≈ b P_bit
Figure 7.29. Binary symmetric channel model: bit b_ℓ is received as b̂_ℓ, with transition
probabilities 1 − P_bit (correct detection) and P_bit (error).
Figure 7.30. Composite transmission scheme of an analog signal via a binary channel.
We assume the binary channel symmetric and memoryless. Hence, if we express the
generic detected bit at the output of the binary channel as c̃_j(k) ∈ {0, 1}, the error probability
is given by:

P[c̃_j(k) ≠ c_j(k)] = P_bit

c̃(k) → s̃_q(kT_c) = Q[δ̃]   (7.128)

where

δ̃ = ∑_{j=0}^{b−1} c̃_j(k) 2^j
                                      (7.129)
Q[δ̃] = −τ_sat + (δ̃ + 1/2) Δ
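A sketch of the map (7.128)–(7.129) in Python; the step size Δ = 2τ_sat/2^b of a uniform b-bit quantizer over (−τ_sat, τ_sat], and the inverse map `pcm_encode`, are assumptions of this sketch:

```python
def pcm_decode(word, b, tau_sat):
    # (7.128)-(7.129): delta~ = sum_j c~_j 2^j, Q[delta~] = -tau_sat + (delta~ + 1/2) Delta
    delta = sum(c << j for j, c in enumerate(word))
    step = 2.0 * tau_sat / (1 << b)          # assumed: Delta = 2 tau_sat / 2^b
    return -tau_sat + (delta + 0.5) * step

def pcm_encode(s, b, tau_sat):
    # index of the quantization interval containing s, clipped to the b-bit range
    step = 2.0 * tau_sat / (1 << b)
    idx = max(0, min((1 << b) - 1, int((s + tau_sat) / step)))
    return [(idx >> j) & 1 for j in range(b)]

# round trip: for an in-range sample the reconstruction error is at most Delta/2
s_hat = pcm_decode(pcm_encode(0.3, 8, 1.0), 8, 1.0)
```

With b = 8 and τ_sat = 1 the step is Δ = 2/256, so the round-trip error on the sample 0.3 stays within Δ/2.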
Given the one-to-one map between words and quantizer levels, using (7.124) it follows that
the error in the reconstructed signal is due to two contributions:
1. the quantization error;
2. the errors on the detection of the binary sequence at the output of the binary channel.
The two error terms are assumed uncorrelated, as they are to be ascribed to different
phenomena. In particular, assuming the quantization noise uniform, from (5.41) it follows

M_eq = Δ²/12 = τ_sat²/(3 · 2^{2b})   (7.134)
The computation of M_eCh is somewhat more difficult. First, from (7.126) and (7.132)
we have

e_Ch(kT_c) = Δ ∑_{j=0}^{b−1} (c̃_j(k) − c_j(k)) 2^j   (7.135)
Figure 7.31. Λ_PCM (dB) as a function of P_bit, for b = 2, 4, 6, 8.
Using (7.134) and (7.142), and for a signal-to-quantization noise ratio Λ_q = M_s/(Δ²/12)
(see (5.33)), we get

Λ_PCM = Λ_q / ( 1 + 4 P_bit (2^{2b} − 1) )   (7.144)

We note that usually P_bit is such that P_bit 2^{2b} ≪ 1: thus Λ_PCM ≈ Λ_q, that is, the
output error is mainly due to the quantization error.
In particular, for a signal s ∈ U(−τ_sat, τ_sat], whereby Λ_q = 2^{2b}, equation (7.144) is
represented in Figure 7.31 for various values of b. For P_bit < 1/(4 · 2^{2b}) the output
signal is corrupted mainly by the quantization noise, whereas for P_bit > 1/(4 · 2^{2b}) the
output is affected mainly by errors introduced by the binary channel. For example, for
P_bit = 10^{−4}, going from b = 6 to b = 8 bits per sample yields an increment of Λ_PCM of
only 2 dB.
We observe that in the general case of non-uniform quantization there are no simple
expressions similar to (7.142) and (7.144); however, the above observations remain
valid.
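The 2 dB figure quoted above can be checked directly from (7.144) (Python sketch, assuming the uniform-signal case Λ_q = 2^{2b}):

```python
import math

def lambda_pcm_db(b, p_bit):
    # (7.144) with Lambda_q = 2^(2b) (uniform signal over the quantizer range)
    lam_q = 2.0 ** (2 * b)
    return 10.0 * math.log10(lam_q / (1.0 + 4.0 * p_bit * (lam_q - 1.0)))

gain = lambda_pcm_db(8, 1e-4) - lambda_pcm_db(6, 1e-4)
# for P_bit = 1e-4, going from b = 6 to b = 8 buys only about 2 dB:
# the binary channel, not quantization, dominates the output error
```

For a much smaller P_bit the same function approaches the quantization-limited value 10 log10(2^{2b}), i.e. about 6 dB per bit.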
Analog transmission
The only solution possible in an analog transmission system is to place analog repeaters
consisting of amplifiers with suitable filters to restore the level of the signal and eliminate
the noise outside the passband of the desired signal. The cascade of amplifiers along a
transmission line, however, deteriorates the signal-to-noise ratio.
We consider the simplified scheme of Example 4.2.2 on page 271 with
• s_Ch(t), desired signal at the output of transmission section i, with available power P_sCh;
• r(t) = s_Ch(t) + w(t), overall signal at the amplifier input of repeater i;

P_sCh = (1/a_c) P_s   (7.145)
In this example both the transmission channel and the various amplifiers do not introduce
distortion; the only disturbance in s̃(t) is due to additive noise introduced by the various
devices.
For a source at noise temperature T_0, if F_A is the noise figure of a single amplifier, the
signal-to-noise ratio at the amplifier output of a single section is given by (4.92):

Λ = P_sCh / (k T_0 F_A B)   (7.146)

Analogously for N analog repeater sections, as the overall noise figure is equal to F = N F_sr
(see (4.77)), the overall signal-to-noise ratio, expressed as

Λ_a = E[s²(t)] / E[|s̃(t) − s(t)|²]   (7.147)

is given by

Λ_a = P_s / (k T_0 F B) = Λ/N   (7.148)
Obviously in the derivation of (7.148) it is assumed that (4.83) holds, as a statistical power
ratio is equated with an effective power ratio.
Hence, in a system with analog repeaters, the noise builds up repeater after repeater and
the overall signal-to-noise ratio worsens as the number of repeaters increases. Moreover,
it must be remembered that in practical systems, possible distortion experienced by the
desired signal through the various transmission channels and amplifiers also accumulates,
contributing to an increase of the disturbance in s̃(t).
Digital transmission
In a digital transmission system, as an alternative to the simple amplification of the received
signal r(t), we can resort to the regeneration of the signal. With reference to the scheme of
Figure 7.32, given the signal r(t), the digital message {b̂_ℓ} is first reconstructed, and then
re-transmitted by a modulator.
Modeling each regenerative repeater by a memoryless binary symmetric channel (see
Definition 6.1 on page 457) with error probability P_bit, and ignoring the probability that a
bit undergoes multiple errors along the various repeaters, the bit error probability at the output
of N regenerative repeaters is equal to¹⁰

P_bit,N ≈ 1 − (1 − P_bit)^N ≈ N P_bit   (7.149)

assuming P_bit ≪ 1, and errors of the different repeaters statistically independent.
To obtain an expression of P_bit, it is necessary to specify the type of modulator. Let us
consider an M-PAM system; then from (6.125) we get

P_bit = ( 2(M − 1)/(M log₂M) ) Q( √( (3/(M² − 1)) Γ ) )   (7.150)
¹⁰ We note that a more accurate study shows that the errors have a Bernoulli distribution [4].
¹¹ To simplify the notation, we have indicated with the same symbol s_Ch the desired signal at the amplifier
input for both analog transmission and digital transmission. Note, however, that in the first case s_Ch depends
linearly on s, whereas in the second it represents the modulated signal that does not depend linearly on s.
R_b = b · 2B   (7.154)

Consequently, for an M-PAM modulator, the modulation interval T is equal to log₂M / R_b,
and the minimum bandwidth of the transmission channel is equal to

B_min = 1/(2T) = (b/log₂M) B   (7.155)

We note that the digital transmission of an analog signal may require a considerable expansion
of the required bandwidth, if M is small. Obviously, using a more efficient digital
representation of waveforms, for example by CELP, and/or a modulator with higher spectral
efficiency, for example, by resorting to multilevel transmission, B_min may turn out very
close to B or even smaller.
Using (7.155) in (7.151), from (7.146) we have

Γ = (log₂M / b) Λ   (7.156)
The comparison between the two systems is based on the overall signal-to-noise ratio for the
same transmitted power and transmission channel characteristics. To simplify the notation,
initially we will consider a 2-PAM modulator.
Substituting the value of Γ given by (7.156) for M = 2 in (7.152) and (7.153), and
recalling (7.144), valid for a uniform quantizer with Λ_q = 2^{2b}, that is assuming a uniform
signal, see (5.44), we get

Λ_PCM = 2^{2b} / [ 1 + 4(2^{2b} − 1) Q(√(Λ/(bN))) ]     N analog repeaters
                                                                            (7.157)
Λ_PCM = 2^{2b} / [ 1 + 4(2^{2b} − 1) N Q(√(Λ/b)) ]      N regenerative repeaters
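The two branches of (7.157) can be compared numerically (Python sketch; the per-section SNR of 30 dB and N = 100 sections are illustrative values of ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def lambda_pcm(b, lam, N, regenerative):
    # (7.157), 2-PAM and uniform b-bit PCM; lam is the per-section SNR (linear)
    lam_q = 2.0 ** (2 * b)
    p = N * Q(math.sqrt(lam / b)) if regenerative else Q(math.sqrt(lam / (b * N)))
    return lam_q / (1.0 + 4.0 * (lam_q - 1.0) * p)

lam = 10.0 ** (30.0 / 10.0)      # per-section SNR of 30 dB
adv = lambda_pcm(7, lam, 100, True) / lambda_pcm(7, lam, 100, False)
# with N = 100 sections, regeneration keeps Lambda_PCM essentially at the
# quantization-limited value 2^(2b), while the analog chain is noise-limited
```

The asymmetry is striking: in the regenerative case the per-section argument √(Λ/b) stays fixed and only the small P_bit is multiplied by N, whereas in the analog case the SNR itself is divided by N inside the Q function.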
Figure 7.33. Λ_PCM as a function of Λ_a for analog repeaters and 2-PAM. The parameter b
denotes the number of bits for linear PCM representation.
Figure 7.34. Λ_PCM (dB) as a function of Λ_a (dB), for b = 2, …, 7.
Figure 7.35. Λ_a for analog transmission obtained by varying the number N of analog
repeaters, and Λ_PCM for digital transmission with 2-PAM and b = 7, obtained by varying the
number N of regenerative repeaters, as a function of Λ (signal-to-noise ratio of each repeater
section). The dashed line represents Λ_PCM for 128-PAM and b = 7.
Λ_PCM = Λ_a = 36 dB   (7.159)
Bibliography
[1] L. W. Couch, Digital and analog communication systems. Upper Saddle River, NJ:
Prentice-Hall, 1997.
[3] M. S. Roden, Analog and digital communication systems. Upper Saddle River, NJ:
Prentice-Hall, 1996.
[4] A. Papoulis, Probability, random variables and stochastic processes. New York:
McGraw-Hill, 3rd ed., 1991.
[5] S. Benedetto and E. Biglieri, Principles of digital transmission with wireless applica-
tions. New York: Kluwer Academic Publishers, 1999.
[6] P. Kabal and P. Pasupathy, “Partial-response signaling”, IEEE Trans. on Communica-
tions, vol. 23, pp. 921–934, Sept. 1975.
[7] D. L. Duttweiler, J. E. Mazo, and D. G. Messerschmitt, “An upper bound on the error
probability in decision-feedback equalization”, IEEE Trans. on Information Theory,
vol. 20, pp. 490–497, July 1974.
[8] G. Birkhoff and S. MacLane, A survey of modern algebra. New York: Macmillan
Publishing Company, 3rd ed., 1965.
[9] D. G. Messerschmitt and E. A. Lee, Digital communication. Boston, MA: Kluwer
Academic Publishers, 2nd ed., 1994.
[10] B. R. Saltzberg, “Intersymbol interference error bounds with application to ideal ban-
dlimited signaling”, IEEE Trans. on Information Theory, vol. 9, pp. 563–568, July
1968.
[11] R. Gitlin, J. Hayes, and S. Weinstein, Data communication principles. New York:
Plenum Press, 1992.
[12] M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of communication
systems. New York: Plenum Press, 1992.
7.A. Line codes for PAM systems 583
[Figure: NRZ line code waveforms (NRZ-L, NRZ-M, …) for the binary sequence
1 0 1 1 0 0 0 1 1 0 1.]
2. Polar RZ:
“1” and “0” are represented by opposite pulses with duration equal to half a bit
interval.
3. Bipolar RZ or alternate mark inversion (AMI):
Bits equal to “1” are represented by rectangular pulses having duration equal to half
a bit interval, sequentially alternating in sign, bits equal to “0” by the zero level.
4. Dicode RZ:
A change of polarity in the sequence {b_ℓ}, "1-0" or "0-1", is represented by a level
transition, using a pulse having duration equal to half a bit interval; every other case
is represented by the zero level.
RZ line codes are illustrated in Figure 7.39.
Figure 7.39. Unipolar RZ, polar RZ, bipolar RZ, and dicode RZ waveforms for the binary
sequence 1 0 1 1 0 0 0 1 1 0 1.
do not create synchronization problems. It is easy to see, however, that this line code
leads to a doubling of the transmission bandwidth.
2. Biphase mark (B-M) or Manchester 1:
A transition occurs at the beginning of every bit interval; "1" is represented by a
second transition within the bit interval, "0" is represented by a constant level.
3. Biphase space (B-S):
A transition occurs at the beginning of every bit interval; "0" is represented by a
second transition within the bit interval, "1" is represented by a constant level.
[Figure: biphase line code waveforms (Biphase-L, Biphase-M, …) for the binary sequence
1 0 1 1 0 0 0 1 1 0 1.]
the constraint

2^K ≤ M^N   (7.160)

The KBNT codes are an example of block line codes where the output symbol alphabet is
ternary {−1, 0, 1}.
From (7.161), the relation between the PSDs of the sequences {a_k} and {b_k} is given by

P_a(f) = P_b(f) |1 − e^{−j2πfT}|² = P_b(f) 4 sin²(πfT)
Therefore P_a(f) exhibits zeros at frequencies that are integer multiples of 1/T, in particular
at f = 0. Moreover, from (7.161) we have m_a = 0, independently of the distribution of {b_k}.
If the power of the transmitted signals is constrained, a disadvantage of the encoding
method (7.161) is a reduced noise immunity with respect to antipodal transmission, that is
for a_k ∈ {−1, 1}, because a detector at the receiver must now decide among three levels.
Moreover, long sequences of information bits {b_k} that are all equal to 1 or 0 generate
sequences of symbols {a_k} that are all equal: this is not desirable for synchronization.
In any case, the biggest problem is the error propagation at the decoder, which, observing
(7.162), given that an error occurs in {â_k}, generates a sequence of bits {b̂_k} that are in
error until another error occurs in {â_k}. This problem can be solved by precoding: from the
sequence of bits {b_k} we first generate the sequence of bits {c_k}, with c_k ∈ {0, 1}, by
c_k = b_k ⊕ c_{k−1}   (7.164)

where ⊕ denotes the modulo 2 sum. Next,

a_k = c_k − c_{k−1}   (7.165)

with a_k ∈ {−1, 0, 1}. Hence, it results in

a_k = ±1 if b_k = 1;   a_k = 0 if b_k = 0   (7.166)

In other words, a bit b_k = 0 is mapped into the symbol a_k = 0, and a bit b_k = 1 is
mapped alternately into a_k = +1 or a_k = −1. Consequently, from (7.166) decoding may be
performed simply by taking the magnitude of the detected symbol:

b̂_k = |â_k|   (7.167)
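A sketch of the precoded AMI encoder and its memoryless decoder (7.164)–(7.167), in Python:

```python
def ami_encode(bits):
    # (7.164): c_k = b_k XOR c_{k-1};  (7.165): a_k = c_k - c_{k-1}
    c_prev, symbols = 0, []
    for b in bits:
        c = b ^ c_prev
        symbols.append(c - c_prev)
        c_prev = c
    return symbols

def ami_decode(symbols):
    # (7.167): memoryless decoding, b^_k = |a^_k|
    return [abs(a) for a in symbols]

bits = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1]
a = ami_encode(bits)            # nonzero symbols alternate in sign
# a single detection error corrupts a single bit: no error propagation
a_err = list(a); a_err[2] = 0
n_errors = sum(x != y for x, y in zip(ami_decode(a_err), bits))
```

Because the decoder looks at one symbol at a time, flipping one detected symbol produces exactly one bit error, in contrast with the decoder (7.162) where one symbol error corrupts all following bits.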
It is easy to prove that for a message {b_k} with statistically independent symbols, and
p = P[b_k = 1], we have

P_a(e^{j2πfT}) = 2p(1 − p) sin²(πfT) / ( p² + (1 − 2p) sin²(πfT) )   (7.168)

The plot of P_a(e^{j2πfT}) is shown in Figure 7.41 for different values of p. Note that the
PSD presents a zero at f = 0. Also in this case m_a = 0.
We observe that the AMI line code is a particular case of the partial response system
named dicode [6].
¹² In the present analysis only M-PAM systems are considered; for M-QAM systems the results can be extended
to the signals on the I and Q branches.
We assume that the transmission channel is ideal: the overall system can then be represented
as an interpolator filter having impulse response
A noise signal w_R(t), obtained by filtering w(t) by the receive filter, is added to the desired
signal. Sampling the received signal at instants t_0 + kT yields the sequence {y_k}, as illustrated
in Figure 7.43a. The discrete-time equivalent of the system is shown in Figure 7.43b, where
h_i = q(t_0 + iT), and w_{R,k} = w_R(t_0 + kT). We assume that {h_i} is equal to zero for i < 0
and i ≥ N.
The partial response (PR) polynomial of the system is defined as

l(D) = ∑_{i=0}^{N−1} l_i D^i   (7.171)

where the coefficients {l_i} are equal to the samples {h_i}, and D is the unit delay operator.
[Figure 7.44: the symbols a_k are filtered by l(D) to give a_k^{(t)}, interpolated by the filter g,
corrupted by the noise w_R(t), and sampled at t_0 + kT to yield y_k and the detected symbols â_k.]

∑_{m=−∞}^{+∞} G(f − m/T) = T   (7.172)
The symbols at the output of the filter l(D) in Figure 7.44 are given by

a_k^{(t)} = ∑_{i=0}^{N−1} l_i a_{k−i}   (7.173)

Note that the overall scheme of Figure 7.44 is equivalent to that of Figure 7.43a with

q(t) = ∑_{i=0}^{N−1} l_i g(t − iT)   (7.174)
• a filter with frequency response l(e^{j2πfT}), periodic of period 1/T, that forces the
system to have an overall discrete-time impulse response equal to {h_i};
• an analog filter g that does not modify the overall filter h(D) and limits the system
bandwidth.
As will be clear from the analysis, the decomposition of Figure 7.44, on one hand, allows
simplification of the study of the properties of the filter h(D), and, on the other, the design
of an efficient receiver.
The scheme of Figure 7.44 suggests two possible ways to implement the system of
Figure 7.42:
1. Analog: the system is implemented in analog form; therefore the transmit filter h_Tx
and the receive filter g_Rc must satisfy the relation

H_Tx(f) G_Rc(f) = Q(f) = l(e^{j2πfT}) G(f)   (7.175)

H_Tx^{(PR)}(f) G_Rc^{(PR)}(f) = G(f)   (7.176)

The implementation of a PR system using a digital filter is shown in Figure 7.45. Note
from (7.172) that in both relations (7.175) and (7.176) g is a Nyquist filter.
a) System bandwidth. With the aim of maximizing the transmission bit rate, many PR
systems are designed for minimum bandwidth, i.e. from (7.175) it must be

l(e^{j2πfT}) G(f) = 0   for |f| > 1/(2T)   (7.177)
Substitution of (7.177) into (7.172) yields the following conditions on the filter g:

G(f) = T for |f| ≤ 1/(2T), 0 elsewhere  ⟷  g(t) = sinc(t/T)   (7.178)

Correspondingly, observing (7.174) the filter q assumes the expression

q(t) = ∑_{i=0}^{N−1} l_i sinc((t − iT)/T)   (7.179)
d) Number of output levels. From (7.173), the symbols at the output of the filter l(D) have
an alphabet A^{(t)} of cardinality M^{(t)}. If we indicate with n_l the number of coefficients of
l(D) different from zero, then the following inequality for M^{(t)} holds

n_l (M − 1) + 1 ≤ M^{(t)} ≤ M^{n_l}   (7.180)

In particular, if the coefficients {l_i} are all equal, then M^{(t)} = n_l (M − 1) + 1.
We note that, if l(D) contains more than one factor (1 ± D), then n_l increases and,
observing (7.180), also the number of output levels increases. If the power of the transmitted
signal is constrained, detection of the sequence {a_k^{(t)}} by a threshold detector will cause a
loss in system performance.
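The bound (7.180) can be verified by enumeration (Python sketch; the M-ary antipodal input alphabet {−(M−1), …, −1, +1, …, M−1} of (7.169) is assumed):

```python
from itertools import product

def output_alphabet(l_coeffs, M):
    # A^(t): values of sum_i l_i a_{k-i} over all inputs a in {-(M-1), ..., M-1}, step 2
    A = range(-(M - 1), M, 2)
    nz = [l for l in l_coeffs if l != 0]
    return sorted({sum(li * ai for li, ai in zip(nz, combo))
                   for combo in product(A, repeat=len(nz))})

# duobinary l(D) = 1 + D, M = 2: equal coefficients, so M^(t) = n_l (M-1) + 1 = 3
duo = output_alphabet([1, 1], 2)
# l(D) = 1 + 2D + D^2, M = 2: n_l (M-1) + 1 = 4 <= M^(t) = 5 <= M^{n_l} = 8
quad = output_alphabet([1, 2, 1], 2)
```

The second case shows the bound need not be tight on either side: the unequal coefficients spread the outputs over 5 levels, between the lower bound 4 and the upper bound 8.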
l(D) | Q̃(f) for |f| ≤ 1/(2T) | q̃(t) | M^{(t)}
1 + D | 2T cos(πfT) | (4T²/π) cos(πt/T)/(T² − 4t²) | 2M − 1
1 − D | j2T sin(πfT) | (8Tt/π) cos(πt/T)/(4t² − T²) | 2M − 1
1 − D² | j2T sin(2πfT) | (2T²/π) sin(πt/T)/(t² − T²) | 2M − 1
1 + 2D + D² | 4T cos²(πfT) | (2T³/(πt)) sin(πt/T)/(T² − t²) | 4M − 3
1 + D − D² − D³ | j4T cos(πfT) sin(2πfT) | −(64T³t/π) cos(πt/T)/((4t² − 9T²)(4t² − T²)) | 4M − 3
1 − 2D² + D⁴ | −4T sin²(2πfT) | (8T³/(πt)) sin(πt/T)/(t² − 4T²) | 4M − 3
2 + D − D² | T + T cos(2πfT) + j3T sin(2πfT) | (T²/(πt)) sin(πt/T) (3t − T)/(t² − T²) | 4M − 3
2 − D² − D⁴ | −T + T cos(4πfT) + j3T sin(4πfT) | −(2T²/(πt)) sin(πt/T) (3t − 2T)/(t² − 4T²) | 4M − 3
Figure 7.46. Plot of q(t) for duobinary (solid line) and modified duobinary (dashed line) filters.
f) Transmitted signal spectrum. With reference to the PR system of Figure 7.45, the
spectrum of the transmitted signal is given by (see (7.17))

P_s(f) = (1/T) | l(e^{j2πfT}) H_Tx^{(PR)}(f) |² P_a(f)   (7.190)

product of the functions |l(e^{j2πfT})|² = |2 sin(2πfT)|² and |H_Tx^{(PR)}(f)|² = T² rect(fT),
plotted with continuous lines. For the PAM system, the transmit filter h_Tx is a square root
raised cosine with roll-off factor ρ = 0.5, and the spectrum is plotted with a dashed line.
The term l_0 a_k is the desired part of the signal s_{R,k}, whereas the summation represents
the ISI term that is often designated as "controlled ISI", as it is deliberately introduced.
The receiver detects the symbols {a_k} using the sequence of samples {y_k = a_k^{(t)} + w_{R,k}}.
We discuss four possible solutions.¹³
1. LE-ZF. A zero-forcing linear equalizer (LE-ZF) having D-transform equal to 1/l(D)
is used. At the equalizer output, at instant k the symbol a_k plus a noise term is
¹³ For a first reading it is suggested that only solution 3 is considered. The study of the other solutions should
be postponed until the equalization methods of Chapter 8 are examined.
Figure 7.48. Four possible solutions to the detection problem in the presence of controlled ISI.
obtained; the detected symbols {â_k} are obtained by an M-level threshold detector,
as illustrated in Figure 7.48a. We note, however, that the amplification of noise by
the filter 1/l(D) is infinite at frequencies f such that l(e^{j2πfT}) = 0.
Equation (7.194) shows that a wrong decision negatively influences successive
decisions: this phenomenon is known as error propagation.
3. Threshold detector with M^{(t)} levels. This solution, shown in Figure 7.48c, exploits
the M^{(t)}-ary nature of the symbols a_k^{(t)}, and makes use of a threshold detector with
M^{(t)} levels followed by a LE-ZF. This structure does not lead to noise amplification
as solution 1, because the noise is eliminated by the threshold detector; however,
there is still the problem of error propagation.
Solution 2 using the DFE is often adopted in practice: in fact it avoids noise amplification
and is simpler to implement than the Viterbi algorithm. However, the problem of error
propagation remains.
In this case, using (7.194) the error probability can be written as

Pe = (1 − 1/M) P[ | w_{R,k} + ∑_{i=1}^{N−1} l_i e_{k−i} | > l_0 ]   (7.195)

A lower bound Pe,L can be computed for Pe by assuming the error propagation is absent,
i.e. setting {e_k} = 0, ∀k, in (7.195). If we denote by σ_{w_R} the standard deviation of the noise
w_{R,k}, we obtain

Pe,L = 2 (1 − 1/M) Q(l_0/σ_{w_R})   (7.196)

Assuming w_{R,k} white noise, an upper bound Pe,U is given in [7] in terms of Pe,L:

Pe,U = M^{N−1} Pe,L / ( (M/(M−1)) Pe,L (M^{N−1} − 1) + 1 )   (7.197)

From (7.197) we observe that the effect of the error propagation is that of increasing the
error probability by a factor M^{N−1} with respect to Pe,L.
A solution to the problem of error propagation is represented by precoding, which will
be investigated in depth in Chapter 13.
Precoding
We make use here of the following two simplifications:
1. the coefficients {l_i} are integer numbers;
2. the symbols {a_k} belong to the alphabet A = {0, 1, …, M − 1}; this choice is made
because arithmetic modulo M is employed.
We define the sequence of precoded symbols {ā_k^{(p)}} as:

ā_k^{(p)} l_0 = ( a_k − ∑_{i=1}^{N−1} l_i ā_{k−i}^{(p)} ) mod M   (7.198)

We note that (7.198) has only one solution if and only if l_0 and M are relatively prime [8].
In case l_0 = ··· = l_{j−1} = 0 mod M, and l_j and M are relatively prime, (7.198) becomes

ā_{k−j}^{(p)} l_j = ( a_k − ∑_{i=j+1}^{N−1} l_i ā_{k−i}^{(p)} ) mod M   (7.199)
Applying the PR filter to {ā_k^{(p)}} we obtain the sequence

a_k^{(t)} = ∑_{i=0}^{N−1} l_i ā_{k−i}^{(p)}   (7.200)

From the comparison between (7.198) and (7.200), or in general (7.199), we have the
fundamental relation

a_k^{(t)} mod M = a_k   (7.201)

Equation (7.201) shows that, as in the absence of noise we have y_k = a_k^{(t)}, the symbol
a_k can be detected by considering the received signal y_k modulo M; this operation is
memoryless, therefore the detection of â_k is independent of the previous detections {â_{k−i}},
i = 1, …, N − 1. Therefore the problem of error propagation is solved. Moreover, the
desired signal is not affected by ISI.
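A sketch of the mod-M precoder (7.198) and a check of the fundamental relation (7.201), in Python (the choice of modified duobinary l(D) = 1 − D² with M = 4 and the data sequence are our own illustrative values):

```python
def precode(a_seq, l, M):
    # (7.198): abar_k * l0 = (a_k - sum_{i=1}^{N-1} l_i abar_{k-i}) mod M,
    # solved with the inverse of l0 modulo M (l0 and M relatively prime)
    inv_l0 = pow(l[0], -1, M)
    hist = [0] * (len(l) - 1)            # past precoded symbols, most recent first
    out = []
    for a in a_seq:
        s = sum(li * h for li, h in zip(l[1:], hist))
        abar = (inv_l0 * (a - s)) % M
        out.append(abar)
        hist = [abar] + hist[:-1]
    return out

def pr_filter(x, l):
    # (7.200): a_k^(t) = sum_i l_i x_{k-i}
    return [sum(li * x[k - i] for i, li in enumerate(l) if k - i >= 0)
            for k in range(len(x))]

l, M = [1, 0, -1], 4                     # modified duobinary l(D) = 1 - D^2
a = [3, 1, 0, 2, 2, 1, 3, 0]
a_t = pr_filter(precode(a, l, M), l)
# fundamental relation (7.201): a_k^(t) mod M = a_k, detected memorylessly
```

Because the receiver only reduces each y_k modulo M, a detection error affects a single symbol, which is exactly the property that motivates precoding.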
If the instantaneous transformation

    a_k^{(p)} = 2 ā_k^{(p)} − (M − 1)        (7.202)

is applied to the symbols {ā_k^{(p)}}, we obtain a sequence of symbols that belong to the alphabet A^{(p)} in (7.169). The sequence {a_k^{(p)}} is then input to the filter l(D). Precoding consists of the operation (7.198) followed by the transformation (7.202).
However, we note that (7.201) is no longer valid. From (7.202), (7.200), and (7.198), we obtain the new decoding operation

    a_k = ( a_k^{(t)}/2 + K ) mod M        (7.203)

where

    K = ((M − 1)/2) Σ_{i=0}^{N−1} l_i        (7.204)
A PR system with precoding is illustrated in Figure 7.49. The receiver consists of a threshold detector with M^{(t)} levels that provides the symbols {â_k^{(t)}}, followed by a block that realizes (7.203) and yields the detected data {â_k}.
If we assume that the cardinality of the set A^{(t)} is maximum, i.e. M^{(t)} = M^{n_l}, then the output levels are equally spaced and the symbols a_k^{(t)} are equally likely, with probability

    P[a_k^{(t)} = α] = 1/M^{n_l},    α ∈ A^{(t)}        (7.205)

In general, however, the symbols {a_k^{(t)}} are not equiprobable, because several output levels are redundant, as can be deduced from the following example.
For the dicode filter l(D) = 1 − D with M = 2, the precoding rule (7.198) gives

    ā_k^{(p)} = ( a_k + ā_{k−1}^{(p)} ) mod 2        (7.206)

The symbols {a_k^{(p)}} are obtained from (7.202),

    a_k^{(p)} = 2 ā_k^{(p)} − 1        (7.207)

and are antipodal, a_k^{(p)} ∈ {−1, +1}. Finally, the symbols at the output of the filter l(D) are given by

    a_k^{(t)} = a_k^{(p)} − a_{k−1}^{(p)} = 2 ( ā_k^{(p)} − ā_{k−1}^{(p)} )        (7.208)

The values of ā_{k−1}^{(p)}, a_k, ā_k^{(p)} and a_k^{(t)} are given in Table 7.3. We observe that both output levels ±2 correspond to the symbol a_k = 1 and therefore are redundant; the three levels are not equally likely. The symbol probabilities are given by

    P[a_k^{(t)} = ±2] = 1/4,    P[a_k^{(t)} = 0] = 1/2        (7.209)
Figure 7.50a shows the precoder that realizes equations (7.206) and (7.207). The decoder, realized as a map that associates the symbol â_k = 1 with ±2 and the symbol â_k = 0 with 0, is illustrated in Figure 7.50b.
Table 7.3

    ā_{k−1}^{(p)}   a_k   ā_k^{(p)}   a_k^{(t)}
         0           0        0           0
         0           1        1          +2
         1           0        1           0
         1           1        0          −2
7.A. Line codes for PAM systems
Figure 7.50. Precoder and decoder for a dicode filter l(D) with M = 2.
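Under these definitions the dicode example can be checked numerically. The sketch below (illustrative Python; the initial state ā_{−1}^{(p)} = 0 is an assumption) implements the precoder (7.206)–(7.207), the filter l(D) = 1 − D per (7.208), and the memoryless decoder of Figure 7.50b, verifying that the data are recovered without error propagation:

```python
import random

M = 2

def precode_dicode(a):
    """Precoder (7.206): abar_k = (a_k + abar_{k-1}) mod 2 (assumed state abar_{-1} = 0)."""
    abar, prev = [], 0
    for ak in a:
        prev = (ak + prev) % M
        abar.append(prev)
    return abar

def transmit_dicode(abar):
    """Map per (7.207) to a^(p) in {-1,+1}, then apply l(D) = 1 - D per (7.208)."""
    ap = [2 * x - 1 for x in abar]
    at, prev = [], -1                 # prev matches abar_{-1} = 0
    for x in ap:
        at.append(x - prev)           # a^(t) in {-2, 0, +2}
        prev = x
    return at

def decode(at):
    """Memoryless decoder of Figure 7.50b: +-2 -> 1, 0 -> 0."""
    return [1 if abs(y) == 2 else 0 for y in at]

random.seed(0)
a = [random.randint(0, 1) for _ in range(1000)]
assert decode(transmit_dicode(precode_dicode(a))) == a
```

Because the decoder uses only the current sample, an isolated detection error cannot propagate to later symbols.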
    y_k = a_k + w̃_{A,k}        (7.210)
b) Duobinary signal with precoding. The transmitted signal is now given by a_k^{(t)} = a_k^{(p)} + a_{k−1}^{(p)} ∈ {−2, 0, 2}, where a_k^{(p)} ∈ {−1, 1} is given by (7.202) and (7.198). The received signal is given by

    y_k = a_k^{(t)} + w̃_{B,k}        (7.212)
    P_bit = P[â_k ≠ a_k]

where σ²_{w̃_C} = 2σ_I². We consider a receiver that applies MLSD to recover the data; from Example 8.12.1 on page 687 it results that

    P_bit = K Q( √8 / (2σ_{w̃_C}) ) = K Q( 1/σ_I )        (7.214)

where K is a constant.
We note that the PR system employing MLSD at the receiver achieves a performance similar to that of a system transmitting antipodal signals, as MLSD exploits the correlation between symbols of the sequence {a_k^{(t)}}.
where σ²_{w̃_D} = 2σ_I².
An attempt at pre-equalizing the signal at the transmitter by inserting a filter l(D) = 1/(1 + D) = 1 − D + D² − ··· would yield symbols a_k^{(t)} with unlimited amplitude; therefore such a configuration cannot be used. Equalization at the receiver using the scheme of Figure 7.48a would require a filter of the type 1/(1 + D), which would lead to unlimited noise enhancement.
Therefore we resort to the scheme of Figure 7.48c, where the threshold detector has thresholds set at ±1. To avoid error propagation, we precode the message and transmit the sequence {a_k^{(p)}} instead of {a_k}. At the receiver we have

    y_k = a_k^{(p)} + a_{k−1}^{(p)} + w̃_{D,k}        (7.216)
We consider now the application of the MAP criterion to an M-PAM system, where

    α_n = 2n − 1 − M,    n = 1, …, M        (7.221)

The decision regions {R_n}, n = 1, …, M, are formed by intervals or, in general, by unions of intervals, whose boundary points are called decision thresholds {τ_i}, i = 1, …, M − 1.
Figure 7.51. Optimum thresholds for a 4-PAM system with non-equally likely symbols.
We point out that, if the probability that the symbol ℓ is sent is very small, p_ℓ ≪ 1, the measure of the corresponding decision interval could be equal to zero, and consequently this symbol would never be detected. In this case the decision thresholds are fewer than M − 1.
For a 4-PAM system with thresholds τ_1, τ_2, and τ_3, the probability of correct decision is given by (6.18):

    P[C] = Σ_{n=1}^{4} p_n ∫_{R_n} p_w(ρ − h_0 α_n) dρ
         = p_1 ∫_{−∞}^{τ_1} p_w(ρ − h_0 α_1) dρ + p_2 ∫_{τ_1}^{τ_2} p_w(ρ − h_0 α_2) dρ
           + p_3 ∫_{τ_2}^{τ_3} p_w(ρ − h_0 α_3) dρ + p_4 ∫_{τ_3}^{+∞} p_w(ρ − h_0 α_4) dρ        (7.227)

For equally likely symbols, the optimum thresholds are

    τ_i = h_0 (2i − M),    i = 1, …, M − 1        (7.228)
    y_k = h_0 a_k + i_k + w_{R,k}        (7.230)

where i_k denotes the ISI, and w_{R,k} is Gaussian noise with statistical power σ², statistically independent of the i.i.d. symbols of the message {a_k}.
We examine various methods to compute the symbol error probability in the presence
of ISI.
Exhaustive method
We refer to the case of 4-PAM transmission with N_i = 2 interferers, due to one non-zero precursor and one non-zero postcursor. Therefore we have

    i_k = i(a'_k)        (7.234)

Starting from (7.230), the error probability can be computed by conditioning on the values assumed by a'_k = [α^{(1)}, α^{(2)}] = α ∈ A². For equally likely symbols and thresholds given by (7.228) we have

    P_e = 2 (1 − 1/M) Σ_{α ∈ A²} Q( (h_0 − i(α))/σ ) P[a'_k = α]
        = 2 (1 − 1/M) (1/L) Σ_{α ∈ A²} Q( (h_0 − i(α))/σ )        (7.235)

where L = M^{N_i} is the number of interferer configurations. This method gives the exact value of the error probability in the presence of interferers, but requires the computation of L terms. It can be costly, especially if the number of interferers is large: it is therefore convenient to consider approximations of the error probability obtained by simpler computational methods.
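As an illustration, the exhaustive computation (7.235) can be sketched as follows (Python; the values of h_0, the interferer coefficients, and the function names are assumptions made for the example):

```python
import itertools
import math

def Q(x):
    """Gaussian tail function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_exhaustive(h0, h_isi, M, sigma):
    """Exact P_e per (7.235): average of Q((h0 - i(alpha))/sigma) over
    all L = M^Ni configurations of the interfering symbols."""
    A = [2 * n - 1 - M for n in range(1, M + 1)]      # alphabet (7.221)
    L = M ** len(h_isi)
    s = 0.0
    for alpha in itertools.product(A, repeat=len(h_isi)):
        i_alpha = sum(h * a for h, a in zip(h_isi, alpha))
        s += Q((h0 - i_alpha) / sigma)
    return 2 * (1 - 1 / M) * s / L

# illustrative values: one precursor h_{-1} = 0.05, one postcursor h_1 = -0.08
pe = pe_exhaustive(h0=1.0, h_isi=[0.05, -0.08], M=4, sigma=0.1)
```

With no interferers the expression reduces to 2(1 − 1/M) Q(h_0/σ), which provides a quick sanity check of the implementation.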
Gaussian approximation
If the interferers have similar amplitudes and their number is large, we can invoke the central limit theorem and approximate i_k as a Gaussian random variable. As the process w_{R,k} is Gaussian, the process

    z_k = i_k + w_{R,k}        (7.236)

is also Gaussian, with variance

    σ_z² = σ_i² + σ²        (7.237)

where σ_i² is given by (7.72). Then

    P_e = 2 (1 − 1/M) Q( h_0 / σ_z )        (7.238)

This method, although very convenient, is rather pessimistic, especially for large values of Γ. As a matter of fact, we observe that the amplitude of i_k is limited by the value

    i_max = (M − 1) Σ_{i≠0} |h_i|        (7.239)

whereas the Gaussian approximation implies that the values of i_k are unlimited.
Worst-case bound
This method substitutes for i_k the constant i_max defined in (7.239). In this case P_e is equal to

    P_e = 2 (1 − 1/M) Q( (h_0 − i_max)/σ )        (7.240)

This bound is typically too pessimistic; however, it yields a good approximation if i_k is mainly due to one dominant interferer.
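A sketch of the two approximations (7.238) and (7.240) follows (Python; all numeric values are illustrative, and σ_i² is computed as σ_a² Σ_{i≠0}|h_i|², the standard expression for i.i.d. symbols, assumed here to be the form given by (7.72)):

```python
import math

def Q(x):
    """Gaussian tail function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_gaussian(h0, h_isi, M, sigma, sigma_a2):
    """Gaussian approximation (7.238): z_k = i_k + w_{R,k} treated as
    Gaussian with variance sigma_i^2 + sigma^2."""
    sigma_z = math.sqrt(sigma_a2 * sum(h * h for h in h_isi) + sigma ** 2)
    return 2 * (1 - 1 / M) * Q(h0 / sigma_z)

def pe_worst_case(h0, h_isi, M, sigma):
    """Worst-case bound (7.240) with i_max = (M - 1) sum|h_i| from (7.239)."""
    i_max = (M - 1) * sum(abs(h) for h in h_isi)
    return 2 * (1 - 1 / M) * Q((h0 - i_max) / sigma)
```

In the absence of interferers both expressions collapse to 2(1 − 1/M) Q(h_0/σ), as expected.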
606 Chapter 7. Transmission over dispersive channels
Saltzberg bound
With reference to (7.230), defining z_k as the total disturbance given by (7.236), in general we have

    P_e = 2 (1 − 1/M) P[z_k > h_0]        (7.241)

Let α_max = M − 1 in the specific case, and let I be any subset of the non-zero integers Z₀ such that

    Σ_{i ∈ I} |h_i| < h_0 / α_max        (7.243)
The bound is particularly simple in the case of binary signaling, where a_k ∈ {−1, 1}:

    P_e < exp( − ( h_0 − Σ_{i ∈ I} |h_i| )² / [ 2 ( σ² + Σ_{i ∈ I^C} |h_i|² ) ] )        (7.245)

where I is such that Σ_{i ∈ I} |h_i| < h_0. In this case it is rather simple to choose the set I so that the bound is tighter. We begin with I = Z₀; then we remove from I, one by one, the indices i that correspond to the largest values of |h_i|, and we stop when the exponent of (7.245) has reached its minimum. Considering the bound on the function Q given by (6.364), we observe that for I = Z₀ and I^C = ∅ the bound in (7.244) practically coincides with the worst-case bound in (7.240); taking instead I = ∅ and I^C = Z₀, we obtain again the bound given by the Gaussian approximation for z_k that yields (7.238).
For the mathematical details we refer to [10]; for a comparison between the Saltzberg
bound and other bounds we refer to [5, 11].
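The greedy selection of the set I described above can be sketched as follows (Python; the function name is our own, and the enumeration over splits is equivalent to removing the largest |h_i| from I one at a time):

```python
import math

def saltzberg_binary(h0, h_isi, sigma):
    """Bound (7.245) for binary signaling: enumerate the splits produced by
    moving the largest-|h_i| taps from I to its complement, and keep the
    tightest (smallest) bound."""
    taps = sorted(h_isi, key=abs)            # ascending |h_i|
    best = float("inf")
    for k in range(len(taps) + 1):
        in_I, in_Ic = taps[:k], taps[k:]     # k smallest taps stay in I
        margin = h0 - sum(abs(h) for h in in_I)
        if margin <= 0:                      # constraint (7.243) violated
            continue
        denom = 2 * (sigma ** 2 + sum(h * h for h in in_Ic))
        best = min(best, math.exp(-margin ** 2 / denom))
    return best
```

The two extreme splits reproduce, respectively, a worst-case-style exponent (all taps in I) and a Gaussian-approximation-style exponent (all taps in I^C), in line with the discussion above.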
7.B. Computation of P_e for some cases of interest
GQR method
The GQR method is based on a technique for the approximate computation of integrals
called Gauss quadrature rule (GQR). It offers a good compromise between computational
complexity and approximation accuracy.
If we assume a very large number of interferers, in the limit infinite, i_k can be modelled as a continuous random variable. Then P_e assumes the expression

    P_e = 2 (1 − 1/M) ∫_{−∞}^{+∞} Q( (h_0 − ξ)/σ ) p_{i_k}(ξ) dξ = 2 (1 − 1/M) I        (7.246)

The integral I is approximated by a quadrature sum of the form Σ_{j=1}^{N_w} w_j Q( (h_0 − ξ_j)/σ ). In this expression the parameters {ξ_j} and {w_j} are called, respectively, abscissae and weights of the quadrature rule, and are obtained by a numerical algorithm based on the first 2N_w moments of i_k. The quality of the approximation depends on the choice of N_w [5].
General scheme
For transmission over a passband channel, a PAM signal must be suitably shifted in frequency by a sinusoidal carrier at frequency f_0. This task is achieved by DSB modulation (see Example 1.6.3 on page 41) of the signal s(t) at the output of the baseband PAM modulator filter.
In the case of a coherent receiver, the passband scheme is given in Figure 7.52. For the
baseband equivalent model, we refer to Figure 7.53a.
Now we consider the study of the PAM-DSB transmission system in the unified framework of Figure 7.12. Assuming the receive filter g_Rc is real-valued, we apply the operator Re[·] to the channel filter impulse response and to the noise signal, and we split the factor 1/2 evenly between the channel filter and the receive filter responses; setting g_Rc(t) → g_Rc(t)/√2, we thus obtain the simplified scheme of Figure 7.53b, where the noise signal contains only the in-phase component w'_I(t), with PSD

    P_{w'_I}(f) = N_0/2    (V²/Hz)        (7.248)

and

    g_C(t) = Re[ e^{j(φ_1 − φ_0)} g_Ch^{(bb)}(t) / (2√2) ]        (7.249)
Signal-to-noise ratio
We assume the function

    e^{j(φ_1 − φ_0)} g_Ch^{(bb)}(t) / (2√2)        (7.254)

is real-valued; then from Figure 7.53a, using (1.295), we have the following relation:

    E[|s_C(t)|²] = E[|s_Ch^{(bb)}(t)|²] / 2 = E[|s_Ch(t)|²]        (7.255)
Setting

    M_a = (M² − 1)/3        (7.258)

in the absence of ISI, for Γ defined in (7.106), (7.107) still holds; moreover, using (7.257), for a matched filter receiver, (7.113) yields

    Γ_MF = E_{q_C} / (N_0/2) = 2Γ / M_a        (7.259)

Then the error probability is given by

    P_e = 2 (1 − 1/M) Q( √( 6Γ / (M² − 1) ) )        (7.260)
We observe that the performance of an M-PAM-DSB system and that of an M-PAM system are the same in terms of P_e as a function of the received power. However, because of DSB modulation, the required bandwidth is doubled with respect to both baseband PAM transmission and PAM-SSB modulation.¹⁴ This explains the limited usage of PAM-DSB for digital transmission.

¹⁴ The PAM-SSB scheme presents in practice considerable difficulties because the filter for modulation is non-ideal: in fact, this causes distortion of the signal s(t) at low frequencies that may be compensated for only by resorting to line coding (see Appendix 7.A).
7.D. Implementation of a QAM transmitter
Three structures, which differ by the position of the digital-to-analog converter, may be considered for the implementation of a QAM transmitter. In Figure 7.54 the modulator employs, for both in-phase and quadrature signals, a DAC after the interpolator filter h_Tx, followed by an analog mixer that shifts the signal to passband. This scheme works if the sampling frequency 1/T_c is much greater than twice the bandwidth B of h_Tx.
For applications where the symbol rate is very high, the DAC is placed right after the bit mapper and the various filters are analog (see Chapter 19).
In the implementation illustrated in Figure 7.55, the DAC is instead placed at an intermediate stage with respect to the case of Figure 7.54. Samples are premodulated by a digital mixer to an intermediate frequency f_1, interpolated by the DAC, and subsequently remodulated by a second analog mixer that shifts the signal to the desired band. The intermediate frequency f_1 must be greater than the bandwidth B and smaller than 1/(2T_c) − B, thus avoiding overlap among spectral components. We observe that this scheme requires only one DAC, but the sampling frequency must be at least doubled as compared to the previous scheme.
For the first implementation, as the system is typically oversampled with a sampling interval T_c = T/4 or T_c = T/8, the frequency response of the DAC, G_I(f), may be considered constant in the passband of both the in-phase and quadrature signals. For the second implementation, unless f_1 ≪ 1/T_c, the distortion introduced by the DAC should be considered and equalized by one of the following methods (see page 338):
• including the compensation for G_I(f) in the frequency response of the filter h_Tx;
• inserting a digital filter before the DAC;
• inserting an analog filter after the DAC.
We recall that an efficient implementation of the interpolator filter h_Tx is obtained by the polyphase representation, as shown in Figure 7.56 for T_c = T/8, where

    h^{(ℓ)}(m) = h_Tx( mT + ℓ T/8 ),    ℓ = 0, 1, …, 7,    m = −∞, …, +∞        (7.261)

To implement the scheme of Figure 7.56, once the impulse response is known, it may be convenient to precompute the possible values of the filter output and store them in a table or RAM. The symbols {a_{k,I}} are then used as pointers to the table itself. The same approach may be followed to generate the values of the signals cos(2π f_1 nT_c) and sin(2π f_1 nT_c) in Figure 7.55, using an additional table and the index n as a cyclic pointer.
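As a sketch of this table-lookup approach (Python; the pulse values, branch count and function names are illustrative assumptions), the fragment below splits a short pulse into its polyphase branches per (7.261) and precomputes every possible branch output for binary in-phase symbols:

```python
import itertools

def polyphase_tables(h, P, alphabet):
    """For each polyphase branch l = 0..P-1 (samples h(mT + l*T/P), cf. (7.261)),
    precompute a table of all possible branch outputs, indexed by the tuple of
    symbols currently in the filter memory."""
    branches = [h[l::P] for l in range(P)]
    memory = max(len(b) for b in branches)
    tables = []
    for b in branches:
        tables.append({syms: sum(c * a for c, a in zip(b, syms))
                       for syms in itertools.product(alphabet, repeat=memory)})
    return tables, memory

# illustrative short pulse, P = 4 branches, binary in-phase symbols
h = [0.1, 0.4, 0.8, 1.0, 0.8, 0.4, 0.1, 0.0]
tables, memory = polyphase_tables(h, 4, (-1, 1))
y = tables[2][(1, -1)]    # output sample at phase 2 for memory (a_k, a_{k-1})
```

At run time each output sample then costs one table lookup keyed by the recent symbols, with no multiplications, which is the point of the RAM-based implementation.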
7.E. Simulation of a QAM system
In Figure 7.12 we consider the baseband equivalent scheme of a QAM system. The aim is
to simulate the various transformations in the discrete-time domain and to estimate the bit
error probability.
This simulation method, also called Monte Carlo, is simple and general because it does
not require any special assumption on the processes involved; however, it is intensive from
the computational point of view. For alternative methods, for example semi-analytical, to
estimate the error probability, we refer to specific texts on the subject [12].
We describe the various transformations in the overall discrete-time system depicted in Figure 7.57, where the only difference with respect to the scheme of Figure 7.12 is that the filters are implemented in the discrete-time domain with sampling period T_Q = T/Q_0.

Figure 7.57. Baseband equivalent model of a QAM system with discrete-time filters and sampling period T_Q = T/Q_0. At the receiver, in addition to the general scheme, a multirate structure to obtain samples of the received signal at the timing phase t_0 is also shown.
Bit mapper. The bit mapper maps patterns of information bits to symbols; the symbol
constellation depends on the modulator (see Figure 7.6 for two constellations).
where typically w_{N_h} is the discrete-time rectangular window or the Hamming window, and h_id is the ideal impulse response.
Frequency responses of h_Tx are illustrated in Figure 7.58 for h_id a square root raised cosine pulse with roll-off factor ρ = 0.3, and w_{N_h} a rectangular window of length N_h, for various values of N_h (T_Q = T/4). The corresponding impulse responses are shown in Figure 7.59.
Transmission channel. For a radio channel the discrete-time model of Figure 4.35 can be used, where, in the case of a channel affected by fading, the coefficients of the FIR filter that models the channel impulse response are random variables with a given power delay profile. For a transmission line the discrete-time model of (4.150) can be adopted.
We assume the statistical power of the signal at the output of the transmission channel is given by M_sCh = M_sC.
Figure 7.58. Magnitude of the transmit filter frequency response, |H_Tx(f)| (dB), for a windowed square root raised cosine pulse with roll-off factor ρ = 0.3, for three values of N_h (17, 25, 33; T_Q = T/4).
Figure 7.59. Transmit filter impulse response, {h_Tx(qT_Q)}, q = 0, …, N_h − 1, for a windowed square root raised cosine pulse with roll-off factor ρ = 0.3, for three values of N_h (17, 25, 33; T_Q = T/4).
where

    σ²_{w_C} = N_0 (1/T_Q)        (7.264)
Usually the signal-to-noise ratio Γ given by (6.105) is assigned. For a QAM system, from (7.51) and (7.55) we have

    Γ = M_sC / (N_0 (1/T)) = M_sC / (σ²_{w_C} (T_Q/T))        (7.265)

The standard deviation of the noise to be inserted in (7.263) is given by

    σ_{w_C} = √( M_sC Q_0 / Γ )        (7.266)

We note that σ_{w_C} is a function of M_sC, of the oversampling ratio Q_0 = T/T_Q, and of the given ratio Γ. In place of Γ, the ratio E_b/N_0 = Γ/log₂ M may be assigned.
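The computation of (7.266) can be sketched as follows (Python; the dB conversion and the parameter names are our own):

```python
import math

def noise_std(Ms_C, Gamma_dB, Q0):
    """sigma_wC per (7.266): sqrt(Ms_C * Q0 / Gamma), with Gamma assigned
    in dB and Q0 = T / T_Q the oversampling ratio."""
    Gamma = 10.0 ** (Gamma_dB / 10.0)
    return math.sqrt(Ms_C * Q0 / Gamma)
```

For instance, with M_sC = 1, Γ = 10 dB and Q_0 = 4, the noise standard deviation is √0.4; doubling Γ (in dB) shrinks it accordingly.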
Receive filter. As will be discussed in Chapter 8, there are several possible solutions for the receive filter. The most common choice is a matched filter g_M, matched to h_Tx, of the square root raised cosine type. Alternatively, the receive filter may be a simple anti-aliasing FIR filter g_AA, with passband at least equal to that of the desired signal. The filter attenuation in the stopband must be such that the statistical power of the noise evaluated in the passband is larger by a factor of 5–10 than the power of the noise evaluated in the stopband, so that we can ignore the contribution of the stopband noise at the output of the filter g_AA.
If we adopt as bandwidth of g_AA the Nyquist frequency 1/(2T), the stopband of an ideal filter with unit gain goes from 1/(2T) to 1/(2T_Q); therefore the ripple δ_s in the stopband must satisfy the constraint

    [ N_0 (1/(2T)) ] / [ δ_s N_0 ( 1/(2T_Q) − 1/(2T) ) ] > 10        (7.267)
Interpolator filter. The interpolator filter is used to increase the sampling rate from 1/T_Q to 1/T_Q': this is useful when T_Q is insufficient to obtain the accuracy needed to represent the timing phase t_0. This filter can be part of g_M or g_AA. From Appendix 1.A, the efficient implementation of {g_M(pT_Q')} is obtained by the polyphase representation with T_Q/T_Q' branches.
To improve the accuracy of the desired timing phase, further interpolation, for example linear, may be employed.
Timing phase. Assuming a training sequence is available, for example of the PN type, {a_0 = p(0), a_1 = p(1), …, a_{L−1} = p(L − 1)}, a simple method to determine t_0 is to choose the timing phase in correspondence of the peak of the overall impulse response. Let {x(pT_Q')} be the signal before downsampling. If we evaluate

    m_opt = arg max_m |r_xa(mT_Q')|
          = arg max_m | (1/L) Σ_{ℓ=0}^{L−1} x(ℓT + mT_Q') p*(ℓ) |,    m_min T_Q' < mT_Q' < m_max T_Q'        (7.269)

then

    t_0 = m_opt T_Q'        (7.270)

In (7.269), m_min T_Q' and m_max T_Q' are estimates of the minimum and maximum system delay, respectively. Moreover, we note that the accuracy of t_0 is equal to T_Q' and that the amplitude of the desired signal is h_0 = r_xa(m_opt T_Q')/r_a(0).
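The correlation search (7.269)–(7.270) can be sketched as follows (Python, real-valued; the channel in the check is an assumed pure delay of three samples, and all names are illustrative):

```python
import random

def timing_phase(x, p, Qp, m_min, m_max):
    """(7.269)-(7.270): correlate the oversampled signal x (period T_Q')
    against the training sequence p (one symbol every Qp samples) and
    return the lag with the largest |r_xa|."""
    L = len(p)
    best_m, best_val = m_min, -1.0
    for m in range(m_min, m_max):
        r = sum(x[l * Qp + m] * p[l] for l in range(L)) / L
        if abs(r) > best_val:
            best_val, best_m = abs(r), m
    return best_m, best_val

# illustrative check: ideal channel with a pure delay of 3 samples
random.seed(0)
p = [random.choice((-1, 1)) for _ in range(64)]
Qp = 4
x = [0.0] * 3 + [v for s in p for v in (s, 0.0, 0.0, 0.0)] + [0.0] * 8
m_opt, peak = timing_phase(x, p, Qp, 0, 8)
```

Here the peak lag recovers the inserted delay, and the peak value plays the role of r_xa(m_opt T_Q'), from which h_0 would be obtained after division by r_a(0).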
Equalizer. After downsampling, the signal is usually input to an equalizer (LE, FSE or DFE; see Chapter 8). The output signal of the equalizer always has sampling period equal to T. As observed several times, to decimate simply means to evaluate the output at the desired instants.
Data detection. The simplest method resorts to a threshold detector, with thresholds de-
termined by the constellation and the amplitude of the pulse at the decision point.
Inverse bit mapper. The inverse bit mapper performs the inverse function of the bit map-
per. It translates the detected symbols into bits that represent the recovered information bits.
Simulations are typically used to estimate the bit error probability of the system for a certain set of values of Γ. We recall that caution must be taken at the beginning and at the end of a simulation to account for transients of the system. Let K̄ be the number of recovered bits. The estimate of the bit error probability P_bit is given by

    P̂_bit = (number of bits received with errors) / (number of received bits, K̄)        (7.271)
For example, we find that with P_bit = 10^{−ℓ} and K̄ = 10^{ℓ+1}, we have a confidence interval of about a factor of 2 with a probability of 95%, that is P[(1/2)P_bit ≤ P̂_bit ≤ 2P_bit] ≃ 0.95. This is in good agreement with the experimental rule of selecting

    K̄ = 3 · 10^{ℓ+1}        (7.273)
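The rule (7.273) and the estimator (7.271) can be sketched together (Python; binary antipodal signaling in AWGN is assumed so that the true value is Q(1/σ), and the seed is fixed for reproducibility):

```python
import random

def required_bits(l):
    """Rule (7.273): K = 3 * 10^(l+1) bits to estimate P_bit ~ 10^-l."""
    return 3 * 10 ** (l + 1)

def estimate_pbit(sigma, K, seed=1):
    """Monte Carlo estimate (7.271) for binary antipodal symbols in AWGN
    with noise standard deviation sigma; the true value is Q(1/sigma)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(K):
        a = rng.choice((-1, 1))
        y = a + rng.gauss(0.0, sigma)
        errors += (1 if y > 0 else -1) != a
    return errors / K

pb = estimate_pbit(sigma=0.5, K=required_bits(2))   # true P_bit = Q(2)
```

With σ = 0.5 the true P_bit is about 2.3 · 10^{−2}, so ℓ = 2 and K̄ = 3000 gives roughly 70 expected error events, consistent with the factor-of-2 confidence discussed above.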
For a channel affected by fading, the average P_bit is not very significant: in this case it is meaningful to compute the distribution of P_bit for various channel realizations. In practice we assume the transmission of a sequence of N_p packets, each one with K̄_p information bits to be recovered: typically K̄_p = 1000–10000 bits and N_p = 100–1000 packets. Moreover, the channel realization changes at every packet. For a given average signal-to-noise ratio Γ̄ (see (6.347)), the probability P̂_bit(n_p), n_p = 1, …, N_p, is computed for each packet. As a performance measure we use the percentage of packets with P̂_bit(n_p) < P_bit, also called bit error probability cumulative distribution function (cdf), where P_bit assumes values in a certain set.
This performance measure is more significant than the average P_bit evaluated for a very long, continuous transmission of N_p K̄_p information bits. In fact the average P_bit does not show that, in the presence of fading, the system may occasionally exhibit a very large P_bit, and consequently an outage.
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 8
Channel equalization
and symbol detection
With reference to PAM and QAM systems, in this chapter we will discuss several methods
to compensate for linear distortion introduced by the transmission channel. Next, as an
alternative to a memoryless threshold detector, we will analyze detection methods that
operate on sequences of samples.
Recalling the analysis of Section 7.3, we first review three techniques relying on the
zero-forcing filter, linear equalizer, and DFE, respectively, that attempt to reduce the ISI in
addition to maximizing the ratio defined in (7.106).
From (8.2), the magnitude and phase responses of G_Rc can be obtained. In practice, however, although the condition (8.2) leads to the suppression of the ISI, whence the filter g_Rc is called a zero-forcing linear equalizer (LE-ZF), it may also lead to the enhancement of the noise power at the decision point, as expressed by (7.75).
In fact, if the frequency response G_C(f) exhibits strong attenuation at certain frequencies in the range [−(1 + ρ)/(2T), (1 + ρ)/(2T)], then G_Rc(f) presents peaks that determine a large value of σ²_{w_R}. In any event, the choice (8.2) guarantees the absence of ISI at the decision point, and from (7.109) we get

    γ_LE-ZF = 2 / (N_0 E_{g_Rc})        (8.3)
a QAM system. Moreover, from (8.2), neglecting a constant delay, i.e. for t_0 = 0, it results that

    G_Rc(f) = (1/k_1) √( rcos( f/(1/T), ρ ) )        (8.7)

In other words, g_Rc(t) is matched to q_C(t) = k_1 h_Tx(t), and

    γ_LE-ZF = γ_MF        (8.8)

Methods for the design of an LE-ZF filter with a finite number of coefficients are given in Section 8.7.
¹ It would be desirable to find the filter such that P[â_k ≠ a_k] is minimum. This problem, however, is usually very difficult to solve. Therefore we resort to the criterion of minimizing E[|y_k − a_k|²] instead.
8.2. Linear equalizer (LE)
1. the sequence {a_k} is wide-sense stationary (WSS) with spectral density P_a(f);
2. the noise w_C is complex-valued and WSS; in particular we assume it is white with spectral density P_{w_C}(f) = N_0;
3. the sequence {a_k} and the noise w_C are statistically independent.
The minimization of J in this situation differs from the classical problem of the optimum Wiener filter because h_Tx and g_C are continuous-time pulses. By resorting to the calculus of variations (see Appendix 8.A), we obtain the general solution

    G_Rc(f) = e^{−j2πf t_0} Q*_C(f) P_a(f) / [ N_0 + (P_a(f)/T) Σ_{ℓ=−∞}^{+∞} |Q_C(f − ℓ/T)|² ]        (8.11)

The expression of the cost function J for the optimum filter (8.12) is given in (8.40).
From the decomposition (7.62) of G_Rc(f), in (8.12) we have the following correspondences:

    G_M(f) = Q*_C(f) e^{−j2πf t_0}        (8.13)

and

    C(e^{j2πfT}) = σ_a² / [ N_0 + (σ_a²/T) Σ_{ℓ=−∞}^{+∞} |Q_C(f − ℓ/T)|² ]        (8.14)

The optimum receiver thus assumes the structure of Figure 8.1. We note that g_M is the filter matched to the impulse response of the QAM system at the receiver input.²
The filter c is called linear equalizer (LE). It attempts to find the optimum trade-off between removing the ISI and enhancing the noise at the decision point.

² As derived later in the text (see Observation 8.13 on page 681), the output signal of the matched filter, sampled at the modulation rate 1/T, forms a "sufficient statistic" if all the channel parameters are known.
Figure 8.1. Optimum receiver structure for a channel with additive white noise.
Note that the system is perfectly equalized, i.e. there is no ISI. In this case the filter
(8.15) is the linear equalizer zero-forcing, as it completely eliminates the ISI.
2. In the absence of ISI at the output of g M , that is if jQC . f /j2 is a Nyquist pulse, then
C.e j2³ f T / is constant and the equalizer can be removed.
We notice the presence of the parameter D that denotes the lag of the desired signal:
this parameter, which must be suitably estimated, expresses in number of symbol intervals
the delay introduced by the equalizer. The overall delay from the emission of ak to the
generation of the detected symbol aO k is equal to t0 C DT seconds.
However, the particular case of a matched filter, for which g_M(t) = q*_C(−(t − t_0)), is very interesting from a theoretical point of view. We assume that the filter c may have an infinite number of coefficients, i.e. it may be IIR. With reference to the scheme of Figure 8.2a, q is the overall impulse response of the system at the sampler input:
The discrete-time equivalent model is illustrated in Figure 8.2b. The discrete-time overall impulse response is given by

    h_n = q(t_0 + nT) = r_{q_C}(nT)        (8.23)

In particular, it results that

    h_0 = r_{q_C}(0) = E_{q_C}        (8.24)

The sequence {h_n} has z-transform

    Φ(z) = Z[h_n] = P_{q_C}(z)        (8.25)

which, by the Hermitian symmetry of an autocorrelation sequence, r_{q_C}(nT) = r*_{q_C}(−nT), satisfies the relation

    Φ(z) = Φ*(1/z*)        (8.26)

On the other hand, from (1.90), the Fourier transform of (8.23) is given by

    Φ(e^{j2πfT}) = F[h_n] = (1/T) Σ_{ℓ=−∞}^{+∞} |Q_C(f − ℓ/T)|²        (8.27)

Moreover, using the properties of Table 1.3, the correlation sequence of {h_n} has z-transform

    Z[r_h(m)] = Φ(z) Φ*(1/z*)        (8.28)

Also, from (8.20), the z-transform of the autocorrelation of the noise samples w̃_k = w_R(t_0 + kT) is given by

    Z[r_w̃(n)] = Z[r_{w_R}(nT)] = N_0 Φ(z)        (8.29)
The Wiener solution that gives the optimum coefficients is given in the z-transform domain by (2.50):

    C_opt(z) = Z[c_n] = P_dx(z) / P_x(z)        (8.30)

where

    P_dx(z) = Z[r_dx(n)]  and  P_x(z) = Z[r_x(n)]        (8.31)
1. The sequence {a_k} is WSS, with symbols that are statistically independent and have zero mean.

Under the same assumptions 1 and 2, the computation of the autocorrelation of the process {x_k} yields (see also Table 1.6):

    C_opt(z) = σ_a² z^{−D} / [ N_0 + σ_a² Φ(z) ]        (8.38)
It can be observed that, for z = e^{j2πfT}, (8.38) corresponds to (8.14), apart from the term z^{−D}, which accounts for a possible delay introduced by the equalizer.
In relation to the optimum filter C_opt(z), we determine the minimum value of the cost function. We recall the general expression for the Wiener filter (2.53):

    J_min = σ_d² − Σ_{i=0}^{N−1} c_opt,i r*_dx(i)
          = σ_d² − ∫_{−1/(2T)}^{1/(2T)} P*_dx(f) C_opt(e^{j2πfT}) df        (8.39)
Signal-to-noise ratio γ

We define the overall impulse response at the equalizer output, sampled at the modulation rate 1/T, as

    ψ_i = (h_n * c_opt,n)_i        (8.42)

where {h_n} is given by (8.23) and c_opt,n is the impulse response of the optimum filter (8.38). At the decision point we have

    y_k = ψ_D a_{k−D} + Σ_{i=−∞, i≠D}^{+∞} ψ_i a_{k−i} + (w̃_n * c_opt,n)_k        (8.43)

We assume that in (8.43) the total disturbance given by ISI plus noise is modeled as Gaussian noise with variance 2σ_I². Hence, for a minimum distance among symbols of the constellation equal to 2, (7.106) yields

    γ_LE = ( ψ_D / σ_I )²        (8.44)
8.3. LE with a finite number of coefficients
In case the approximation ψ_D ≃ 1 holds, the total disturbance in (8.43) coincides with −e_k, hence 2σ_I² ≃ J_min, and (8.44) becomes

    γ_LE ≃ 2 / J_min        (8.45)

where J_min is given by (8.40).
First solution. The classical block diagram of an adaptive receiver is shown in Figure 8.3.
The matched filter g M is designed assuming an ideal channel. Therefore the equalization
task is left to the filter c; otherwise, if it is possible to rely on some a priori knowledge
of the channel, the filter g M may be designed according to the average characteristics of
the channel. The filter c is then an adaptive transversal filter that attempts, in real time, to
equalize the channel by adapting its coefficients to the channel variations.
Figure 8.3. Receiver implementation by an analog matched filter followed by a sampler and
a discrete-time linear equalizer.
is considered as a wideband signal, g_AA should also attenuate the noise components outside the passband of the desired signal s_C; hence the cut-off frequency of g_AA is between B and F_0/(2T). In practice, to simplify the implementation of the filter g_AA, it is convenient to allow a wide transition band.
Thus the discrete-time filter c needs to accomplish the following tasks:
1. to filter the residual noise outside the passband of the desired signal sC ;
2. to act as a matched filter;
3. to equalize the channel.
Note that the filter c of Figure 8.4 is implemented as a decimator filter (see Appendix 1.A),
where the input signal xn D x.t0 C nTc / is defined over a discrete-time domain with period
Tc D T =F0 , and the output signal yk is defined over a discrete-time domain with period T .
In turn, two strategies may be used to determine an equalizer filter c with N coefficients:
1. the direct method, which employs the Wiener formulation and requires the compu-
tation of the matrix R and the vector p. The description of the direct method is
postponed to Section 8.5 (see Observation 8.2 on page 641);
2. the adaptive method, which we will describe next (see Chapter 3).
Adaptive LE
We analyze now the solution illustrated in Figure 8.3: the discrete-time equivalent scheme
is illustrated in Figure 8.5, where fh n g is the discrete-time impulse response of the overall
system, given by
and

    w̃_k = w_R(t_0 + kT)        (8.47)
2. Select the law of coefficient update. For example, for an FIR filter c with N coefficients, using the LMS algorithm (see Section 3.1.2) we have:
a) input vector,
b) coefficient vector,
c) adaptation gain μ, with

    0 < μ < 2 / (N r_x(0))        (8.52)
3. To evaluate the error signal e_k used by the adaptive algorithm we distinguish two modes.
a) Training mode:

    e_k = a_{k−D} − y_k,    k = D, …, L_TS + D − 1        (8.53)

Figure 8.6. Linear adaptive equalizer implemented as a transversal filter with N coefficients.

b) Decision-directed mode:

    e_k = â_{k−D} − y_k,    k ≥ L_TS + D        (8.54)

Once the transmission of the TS is completed, we assume that the equalizer has converged; therefore â_k ≃ a_k, and the transmission of information symbols may start. In (8.53) we then substitute the known transmitted symbol with the detected symbol, obtaining (8.54).
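Steps 2 and 3 can be sketched as follows (Python, real-valued for simplicity; the channel 1 + 0.3D, the parameter values and all names are assumptions made for the example):

```python
import random

def lms_equalizer(x, a, N, D, mu, L_ts):
    """LMS transversal equalizer: training mode (8.53) while k - D < L_ts,
    then decision-directed mode (8.54) with a binary {-1,+1}
    threshold detector."""
    c = [0.0] * N
    out = []
    for k in range(N - 1, len(x)):
        xk = [x[k - i] for i in range(N)]                  # input vector
        yk = sum(ci * xi for ci, xi in zip(c, xk))         # equalizer output
        ak_hat = 1 if yk >= 0 else -1                      # threshold detector
        ref = a[k - D] if k - D < L_ts else ak_hat         # (8.53) vs (8.54)
        ek = ref - yk                                      # error signal
        c = [ci + mu * ek * xi for ci, xi in zip(c, xk)]   # LMS update
        out.append(ak_hat)
    return out, c

# illustrative noiseless check on a mildly dispersive channel 1 + 0.3 D
random.seed(2)
a = [random.choice((-1, 1)) for _ in range(2000)]
x = [a[0]] + [a[k] + 0.3 * a[k - 1] for k in range(1, len(a))]
out, c = lms_equalizer(x, a, N=5, D=0, mu=0.01, L_ts=500)
errs = sum(out[j] != a[j + 4] for j in range(len(out) - 500, len(out)))
```

The adaptation gain respects the stability condition (8.52), and after the training segment the detected symbols are used in place of the known ones, exactly as in the switch from (8.53) to (8.54).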
    h_i = q(t_0 + iT_c)        (8.55)
For the analysis, the filter c is decomposed into a discrete-time filter with sampling period T_c cascaded with a downsampler.
The input signal to the filter c is the sequence {x_n} with sampling period T_c = T/F_0; the n-th sample of the sequence is given by

    x_n = Σ_{k=−∞}^{+∞} h_{n−kF_0} a_k + w̃_n        (8.58)

    y'_n = Σ_{i=0}^{N−1} c_i x_{n−i}        (8.59)

We note that the overall impulse response at the filter output, defined on the discrete-time domain with sampling period T_c, is given by

    ψ_i = (h * c)_i        (8.60)

If we denote by {y'_n} the sequence of samples at the filter output, and by {y_k} the downsampled sequence, we have

    y_k = y'_{kF_0}        (8.61)

    s_R(t) = Σ_{n=−∞}^{+∞} a_n h(t − nT),    t ∈ ℝ        (8.62)
In Section 7.3 we considered continuous-time Nyquist pulses h(t), t ∈ ℝ. Let h(t) be defined now on a discrete-time domain {nT_c}, n integer, where T_c = T/F_0. If F_0 is an integer, the discrete-time pulse satisfies the Nyquist criterion if h(0) ≠ 0 and h(ℓF_0 T_c) = 0 for all integers ℓ ≠ 0. In the particular case F_0 = 1 we have T_c = T, and the Nyquist conditions
[Figure 8.8. Time- and frequency-domain behavior of a discrete-time Nyquist pulse: (a) F_0 = 2; (b) F_0 = 1.]
impose that h(0) ≠ 0 and h(nT) = 0 for n ≠ 0. Recalling the input–output downsampler relations in the frequency domain, it is easy to deduce the behavior of a discrete-time Nyquist pulse in the frequency domain: two examples are given in Figure 8.8, for F_0 = 2 and F_0 = 1. We note that, for F_0 = 1, a discrete-time Nyquist pulse is equal to a constant in the frequency domain.
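The discrete-time Nyquist criterion can be checked numerically. Here we sample a raised-cosine pulse at T_c = T/2 (F_0 = 2) and verify that the T-spaced samples vanish away from the origin; the roll-off value is chosen so that the 0/0 point of the closed-form expression does not fall on the sampling grid.

```python
import numpy as np

T, F0, beta = 1.0, 2, 0.35
Tc = T / F0
n = np.arange(-20, 21)
t = n * Tc
# Raised-cosine pulse; with beta = 0.35 the singular point t = T/(2*beta)
# is not a multiple of Tc, so the formula is safe to evaluate on the grid.
h = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / (1 - (2 * beta * t / T) ** 2)

# Nyquist criterion on the discrete-time domain: h(0) != 0, h(l*F0*Tc) = 0, l != 0
sym = h[n % F0 == 0]                 # samples at multiples of T
l = n[n % F0 == 0] // F0
assert abs(h[n == 0][0] - 1.0) < 1e-12
assert np.all(np.abs(sym[l != 0]) < 1e-12)
print("Nyquist criterion satisfied at the T-spaced points")
```

The samples at odd multiples of T_c are of course nonzero; only the downsampled (T-spaced) sequence must reduce to a unit pulse.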
With reference to the scheme of Figure 8.7, the QAM pulse defined on the discrete-time domain with period T_c is given by (8.55), where q(t) is defined in (8.56). From (8.60), using (8.55), the pulse {ψ_i} at the equalizer output before the downsampler has the Fourier transform given by (8.63).
The task of the equalizer is to yield a pulse {ψ_i} that approximates a Nyquist pulse, i.e. a pulse of the type shown in Figure 8.8. We see that, choosing F_0 = 1, i.e. sampling the equalizer input signal with period equal to T, it may happen that H(e^{j2πfT_c}) assumes very small values at frequencies near f = 1/(2T), because of an incorrect choice of the timing phase t_0. In fact, let us assume q(t) is real with a bandwidth smaller than 1/T. Using the polar notation for Q(1/(2T)) we have

    Q(1/(2T)) = A e^{jφ}   and   Q(−1/(2T)) = A e^{−jφ}    (8.64)
If t_0 is such that

    φ + π t_0/T = (2i + 1)π/2,   i integer    (8.66)

then

    H(e^{j2π(1/(2T))T}) = H(e^{jπ}) = 0    (8.67)
In this situation the equalizer will enhance the noise around f = 1/(2T), or converge with difficulty in the adaptive case. If F_0 ≥ 2 is chosen, this problem is avoided because aliasing between replicas of Q(f) does not occur; therefore the choice of t_0 may be less accurate. In fact, as we will see in Chapter 14, if c has an input signal sampled with sampling period T/2, it also acts as an interpolator filter, whose output can be used to determine the optimum timing phase.
In conclusion, the FSE receiver presents two advantages over T-spaced equalizers:
1. It is an optimum structure according to the MSE criterion, in the sense that it carries out the tasks of both matched filter (better rejection of the noise) and equalizer (reduction of ISI).
2. It is less sensitive to the choice of t_0. In fact, the correlation method (7.269) with accuracy T/2 is usually sufficient to determine the timing phase.
Adaptive FSE
The direct method to compute the coefficients of an FSE is described in Section 8.5 (see Observation 8.7 on page 644); we consider now the adaptive method as depicted in Figure 8.9. The choice of the oversampling index F_0 = 2 is very common. For this choice, the input samples of the filter c have sampling period T/2, and the output samples have sampling period T. Note that coefficient update takes place every T seconds. With respect to the basic scheme of Figure 8.7, in a practical implementation the equalizer output is not generated at every sampling instant multiple of T/2, but only at alternate sampling instants. The LMS adaptation equation is given by (8.68).
The adaptive FSE may incur a difficulty in the presence of noise with variance that is small with respect to the level of the desired signal: in this case some eigenvalues of the autocorrelation matrix of x*_{2k} may assume a value that is almost zero, and consequently
the problem of finding the optimum coefficients becomes ill-conditioned, with numerous solutions that present the same minimum value of the cost function. This effect can also be illustrated in the frequency domain: outside the passband of the input signal the filter c may assume arbitrary values, in the limit case of absence of noise. As a result, the coefficients of the filter c may vary in time and also assume very large values.
To mitigate this problem, recalling the leaky LMS algorithm (see page 187), we consider two methods that slightly modify the cost function. In both cases we attempt to impose a constraint on the amplitude that the coefficients may assume.
1. Add to the cost function J a penalty proportional to the squared norm of the coefficient vector; then the update becomes

    c_{k+1} = c_k + μ(e_k x*_{2k} − α c_k) = (1 − μα) c_k + μ e_k x*_{2k}    (8.70)

2. Let

    J_2 = J + α E[ Σ_{i=0}^{N−1} |c_i| ]    (8.71)

in which case the update law is modified analogously.
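Method 1 corresponds to the leaky LMS update (8.70). A minimal sketch follows, exercised on a toy system-identification problem of our own choosing; note how the leak factor (1 − μα) slightly biases the coefficients toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_lms(x, d, N=8, mu=0.05, alpha=1e-4):
    """Leaky LMS: c_{k+1} = (1 - mu*alpha) c_k + mu e_k conj(x_k) (8.70)."""
    c = np.zeros(N, dtype=complex)
    for k in range(N, len(d)):
        xk = x[k:k - N:-1]                    # most recent N input samples
        e = d[k] - np.dot(c, xk)
        c = (1 - mu * alpha) * c + mu * e * np.conj(xk)
    return c

# toy check: identify a short FIR response (not an equalizer, just the update)
h = np.array([1.0, 0.5, -0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)]
c = leaky_lms(x, d)
print(np.round(c.real[:3], 2))                # close to h, mildly shrunk by the leak
```

With α = 0 the recursion reduces to the standard LMS; a nonzero α keeps the coefficient norm bounded when the input autocorrelation matrix is nearly singular.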
8.5. Decision feedback equalizer (DFE)

In the absence of noise, the sample of the received signal at instant t_0 + kT is

    s_k = s_R(t_0 + kT) = Σ_{i=−∞}^{+∞} a_i h_{k−i}    (8.73)
where the sampled pulse {h_n} is defined in (8.46). In the presence of noise we have

    x_k = s_k + w̃_k    (8.74)

In addition to the desired symbol a_k that we wish to detect from the observation of x_k, in (8.75) two terms are identified in parentheses: one that depends only on past symbols a_{k−1}, …, a_{k−N_2}, and another that depends only on future symbols a_{k+1}, …, a_{k+N_1}. If the past symbols and the impulse response {h_n} were perfectly known, we could use an ISI cancellation scheme limited to postcursors. Substituting the past symbols with their detected versions {â_{k−1}, …, â_{k−N_2}}, we obtain a scheme that cancels the ISI in part, as illustrated in Figure 8.11, where, in general, the feedback filter has impulse response {b_n}, n = 1, …, M_2, and output given by (8.76).
The general structure of a DFE is shown in Figure 8.12, where two filters and the detection delay are outlined:

    z_k = x_{FF,k} = Σ_{i=0}^{M_1−1} c_i x_{k−i}    (8.77)

Figure 8.11. Simplified scheme of a DFE, where only the feedback filter is included.

    x_{FB,k} = Σ_{i=1}^{M_2} b_i â_{k−D−i}    (8.78)

Moreover, the sample at the decision point is y_k = x_{FF,k} + x_{FB,k} (8.79).
We recall that for a LE the goal is to obtain a pulse {ψ_n} free of ISI with respect to the desired sample ψ_D. Now (see Figure 8.10) ideally the task of the feedforward filter is to obtain an overall impulse response {ψ_n = (h * c)_n} with very small precursors and a transfer function Ψ(z) that is minimum phase (see Example 1.4.3). In this manner almost all the ISI is cancelled by the FB filter. We note that the FF filter may be implemented as an FSE, whereas the feedback filter operates with sampling period equal to T.
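The DFE recursion (8.77)–(8.78) can be sketched as follows; the function, the binary threshold detector, and the two-tap test channel are illustrative assumptions of ours.

```python
import numpy as np

def dfe(x, c, b, D, symbols=(-1.0, 1.0)):
    """DFE: y_k = sum_i c_i x_{k-i} + sum_j b_j a^_{k-D-j}, decision a^_{k-D} = Q(y_k)."""
    M1, M2 = len(c), len(b)
    a_hat = np.zeros(len(x))                       # last D entries stay undecided
    for k in range(D, len(x)):
        x_ff = sum(c[i] * x[k - i] for i in range(M1) if k - i >= 0)
        x_fb = sum(b[j - 1] * a_hat[k - D - j] for j in range(1, M2 + 1) if k - D - j >= 0)
        y = x_ff + x_fb
        a_hat[k - D] = min(symbols, key=lambda s: abs(y - s))   # threshold detector
    return a_hat

rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], size=200)
x = a + 0.5 * np.concatenate(([0.0], a[:-1]))      # two-tap channel h = [1, 0.5]
a_hat = dfe(x, c=[1.0], b=[-0.5], D=0)
print(np.array_equal(a_hat, a))                    # True: the FB tap cancels the postcursor
```

With a single postcursor, one feedback tap b_1 = −h_1 removes the ISI exactly when the past decisions are correct, which is the ideal situation described above.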
The choice of the various parameters depends on {h_n}. The following guidelines, however, are usually observed.
1. M_1 T/F_0 (time-span of the FF filter) at least equal to (N_1 + N_2 + 1)T/F_0 (time-span of h), so that the FF filter can effectively equalize.
2. M_2 T (time-span of the FB filter) equal to or less than (M_1 − 1)T/F_0 (time-span of the FF filter minus one tap); M_2 depends also on the delay D, which determines the number of postcursors.
3. For very dispersive channels, for which N_1 + N_2 ≫ M_1, it results that M_2 T ≫ M_1 T/F_0.
4. The choice of DT, the detection delay, is obtained by initially choosing a large delay, D ≃ (M_1 − 1)/F_0, to simplify the FB filter. If the precursors are not negligible, to reduce the constraints on the coefficients of the FF filter the value of D is lowered and the system is iteratively designed. In practice, DT is equal to or smaller than (M_1 − 1)T/F_0.
For a LE, instead, DT is approximately equal to ((N − 1)/2)(T/F_0); the criterion is that the center of gravity of the coefficients of the filter c falls approximately at (N − 1)/2.
The detection delays discussed above refer to a pulse {h_n} “centered” at the origin, that does not introduce any delay.
Adaptive DFE
We consider the scheme illustrated in Figure 8.13, where the output signal is given by

    y_k = Σ_{i=0}^{M_1−1} c_i x_{k−i} + Σ_{j=1}^{M_2} b_j â_{k−D−j}    (8.80)

In vector notation,

    y_k = ζ^T ξ_k    (8.83)

and the error signal is

    e_k = â_{k−D} − y_k    (8.84)

For the cost function

    J = E[ |a_{k−D} − y_k|² ]    (8.86)

the Wiener filter theory may be applied to determine the optimum coefficients of the DFE filters in the case â_k = a_k, with the usual assumptions of i.i.d. symbols that are statistically independent of the noise.
For a generic sequence {h_i} in (8.73), we recall the following results:
1. the cross-correlation vector p (8.87);
2. the autocorrelation of x_k (8.88), where

    r_h(n) = Σ_{j=−N_1}^{N_2} h_j h*_{j−n},   r_w̃(n) = N_0 r_{g_M}(nT)    (8.89)

Defining

    ψ_p = (h * c)_p = Σ_{ℓ=0}^{M_1−1} c_ℓ h_{p−ℓ}    (8.90)
Observing (8.91), the optimum choice of the feedback filter coefficients is given by

    b_i = −ψ_{i+D},   i = 1, …, M_2    (8.92)

With this choice, the elements of the matrix R are

    [R]_{p,q} = E[ ( x_{k−q} − Σ_{j1=1}^{M_2} h_{j1+D−q} a_{k−D−j1} )
                   ( x_{k−p} − Σ_{j2=1}^{M_2} h_{j2+D−p} a_{k−D−j2} )* ]
              = σ_a² ( Σ_{j=−N_1}^{N_2} h_j h*_{j−(p−q)} − Σ_{j=1}^{M_2} h_{j+D−q} h*_{j+D−p} ) + r_w̃(p − q),
      p, q = 0, 1, …, M_1 − 1    (8.95)
and, from (8.92), the optimum feedback filter coefficients are given by

    b_i = − Σ_{ℓ=0}^{M_1−1} c_{opt,ℓ} h_{i+D−ℓ},   i = 1, 2, …, M_2    (8.97)

The minimum value of the cost function is

    J_min = σ_a² − Σ_{ℓ=0}^{M_1−1} c_{opt,ℓ} [p]*_ℓ
          = σ_a² ( 1 − Σ_{ℓ=0}^{M_1−1} c_{opt,ℓ} h_{D−ℓ} )    (8.98)
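The direct method can be sketched by building R and p from {h_n} and solving the Wiener–Hopf system. We assume here [p]_q = σ_a² h*_{D−q} (the same form as in the LE case) together with the element expressions (8.95), (8.97) and (8.98); the function name and the one-postcursor test channel are ours.

```python
import numpy as np

def dfe_direct(h, N1, M1, M2, D, sigma_a2=1.0, r_w=lambda n: 0.0):
    """Direct (Wiener) design of a T-spaced DFE.
    h: samples h_{-N1}, ..., h_{N2}; r_w: noise autocorrelation r_w(n)."""
    N2 = len(h) - 1 - N1
    hof = lambda n: h[n + N1] if -N1 <= n <= N2 else 0.0
    R = np.zeros((M1, M1), dtype=complex)
    p = np.zeros(M1, dtype=complex)
    for q in range(M1):
        p[q] = sigma_a2 * np.conj(hof(D - q))       # assumed form of (8.94)
        for pp in range(M1):                        # elements of R, cf. (8.95)
            rh = sum(hof(j) * np.conj(hof(j - (pp - q))) for j in range(-N1, N2 + 1))
            fb = sum(hof(j + D - q) * np.conj(hof(j + D - pp)) for j in range(1, M2 + 1))
            R[pp, q] = sigma_a2 * (rh - fb) + r_w(pp - q)
    c = np.linalg.solve(R, p)
    b = np.array([-sum(c[l] * hof(i + D - l) for l in range(M1))
                  for i in range(1, M2 + 1)])       # (8.97)
    Jmin = sigma_a2 * (1 - sum(c[l] * hof(D - l) for l in range(M1))).real  # (8.98)
    return c, b, Jmin

h = [1.0, 0.5]                                      # h_0 = 1, h_1 = 0.5 (N1 = 0)
c, b, Jmin = dfe_direct(h, N1=0, M1=1, M2=1, D=0)
print(c.real, b.real, Jmin)                         # one FF tap, FB cancels h_1; Jmin ~ 0
```

In the noiseless single-postcursor case the solution reduces to c = [1], b = [−0.5] with J_min = 0, as expected from (8.92).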
Observation 8.1
In the particular case in which all the postcursors are cancelled by the feedback filter, that is for

    M_2 + D = N_2 + M_1 − 1    (8.99)

(8.95) simplifies to

    [R]_{p,q} = σ_a² Σ_{j=−N_1}^{D} h_{j−q} h*_{j−p} + r_w̃(p − q)    (8.100)
Observation 8.2
The equations to determine c_opt for a LE are identical to (8.94)–(8.98), with M_2 = 0. In particular, the vector p in (8.94) is not modified, while in the expression (8.95) of the elements of the matrix R the terms involving the detected symbols vanish.
Observation 8.3
For white noise w_C, the autocorrelation of w̃_k is proportional to the autocorrelation of the receive filter impulse response: consequently, if the statistical power of w̃_k is known, the autocorrelation r_w̃(n) is easily determined. Finally, the coefficients of the channel impulse response {h_n} and the statistical power of w̃_k used in R and p can be determined by the methods given in Appendix 3.B.
Observation 8.4
For a LE, the matrix R is Hermitian and Toeplitz, while for a DFE it is only Hermitian; in any case it is positive (semi-)definite. Efficient methods to determine the inverse of the matrix are described in Section 2.3.2.
Observation 8.5
The definition of {h_n} depends on the value of t_0, which is determined by methods described in Chapter 14. A particularly useful method to determine the impulse response {h_n} in wireless systems (see Chapter 18) resorts to a short training sequence to achieve fast synchronization. We recall that a fine estimate of t_{0,MF} is needed if the sampling period of the signal at the MF output is equal to T. The overall discrete-time system impulse response obtained by sampling the output signal of the anti-aliasing filter g_AA (see Figure 8.4) is assumed to be known, e.g. by estimation. The sampling period, for example T/8, is in principle determined by the accuracy with which we desire to estimate the timing phase t_{0,MF} at the MF output. To reduce implementation complexity, however, a larger sampling period of the signal at the MF input is considered, for example T/2. We then implement the MF g_M by choosing, among the four polyphase components (see Section 1.A.9 on page 119) of the impulse response, the component with largest energy, thus realizing the MF criterion (see also (8.16)). This is equivalent to selecting, among the four possible components with sampling period T/2 of the sampled output signal of the filter g_AA, the component with largest statistical power. This method is similar to the timing estimator (14.117).
The timing phase t_{0,AA} for the signal at the input of g_M is determined during the estimation of the channel impulse response. It is usually chosen either as the time at which the first useful sample of the overall impulse response occurs, or as the time at which the peak of the impulse response occurs, shifted by a number of modulation intervals corresponding to a given number of precursors. Note that, if t_MF denotes the duration of g_M, then the timing phase at the output of g_M is given by t_{0,MF} = t_{0,AA} + t_MF. The criterion (7.269), according to which t_0 is chosen at the correlation peak, is a particular case of this procedure.
Observation 8.6
In systems where the training sequence is placed at the end of a block of data (see the GSM frame in Appendix 17.A), it is convenient to process the observed signal {x_k} starting from the end of the block, say from k = K − 1 to 0, thus exploiting the knowledge of the training sequence. Now, if {ψ_n = (h * c)_n} and {b_n} are the optimum impulse responses when the signal is processed in the forward mode, i.e. for k = 0, 1, …, K − 1, it is easy to verify that {ψ_n^{B*}} and {b_n^{B*}}, where B is the backward operator defined on page 27, are the optimum impulse responses in the backward mode for k = K − 1, …, 1, 0, apart from a constant delay. In fact, if {ψ_n} is ideally minimum phase and causal with respect to the timing phase, then {ψ_n^{B*}} is maximum phase and anticausal with respect to the new instant of optimum sampling. Also the FB filter will be anticausal. In the particular case {h_n} is a correlation sequence, {ψ_n^{B*}} can be obtained using as FF filter the filter with impulse response {c_n^{B*}}.
Let

    ψ_p = (h * c)_p = Σ_{ℓ=0}^{M_1−1} c_ℓ h_{p−ℓ}    (8.103)

Using the following relations (see also Example 1.9.10 on page 72)

    E[a_{k−K} x*_{2k−i}] = σ_a² h*_{2K−i}    (8.106)

    E[x_{2k−q} x*_{2k−p}] = σ_a² Σ_{n=−∞}^{+∞} h_{2n−q} h*_{2n−p} + r_w̃(p − q)    (8.107)

the components of the vector p and the matrix R of the Wiener problem associated with (8.105) are given by (8.109) and (8.110); the optimum coefficients satisfy

    R c_opt = p    (8.111)
and the feedback filter is determined from (8.104). The minimum value of the cost function is given by

    J_min = σ_a² − Σ_{ℓ=0}^{M_1−1} c_{opt,ℓ} [p]*_ℓ
          = σ_a² ( 1 − Σ_{ℓ=0}^{M_1−1} c_{opt,ℓ} h_{2D−ℓ} )    (8.112)
A problem encountered with this method is the inversion of the matrix R in (8.111),
because it may be ill-conditioned. Similarly to the procedure outlined on page 187, a
solution consists in adding a positive constant to the elements on the diagonal of R, so that
R becomes invertible; obviously the value of this constant must be rather small, so that the
performance of the optimum solution does not change significantly.
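Diagonal loading amounts to solving (R + δI) c = p instead of R c = p. A minimal sketch, with a deliberately rank-deficient R:

```python
import numpy as np

def solve_regularized(R, p, delta=1e-6):
    """Diagonal loading: solve (R + delta*I) c = p when R is ill-conditioned."""
    return np.linalg.solve(R + delta * np.eye(R.shape[0]), p)

# rank-deficient R, as may occur for an FSE with negligible noise out of band
R = np.array([[1.0, 1.0], [1.0, 1.0]])
p = np.array([1.0, 1.0])
c = solve_regularized(R, p)
print(np.round(c, 3))     # ~[0.5, 0.5]: the small-norm solution is selected
```

Among the many coefficient vectors achieving the same cost, the loaded system picks the one with the smallest norm, consistent with the constraint on the coefficient amplitudes discussed earlier.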
Observation 8.7
Observations similar to Observations 8.1–8.3 hold for an FS-DFE, with appropriate changes. In this case the timing phase t_0 after the filter g_AA can be determined with accuracy T/2, for example by the correlation method (7.269).
For an FSE, or FS-LE, the equations to determine c_opt are given by (8.109)–(8.111) with M_2 = 0. Note that the matrix R is Hermitian but in general no longer Toeplitz.
Observation 8.8
Two matrix formulations of the direct method to determine the coefficients of a DFE and an FS-DFE are given in Appendix 8.B. In particular, one formulation uses the correlation of the equalizer input signal {x_n}, the correlation of the sequence {a_k}, and the cross-correlation of the two signals: using suitable estimates of the various correlations (see the correlation method and the covariance method considered in Section 2.3), this method avoids the need to estimate the overall channel impulse response; however, it requires a greater computational complexity with respect to the method described in this section.
Signal-to-noise ratio γ
Using FF and FB filters with an infinite number of coefficients, it is possible to achieve the minimum value of J_min. Salz derived the expression of J_min for this case, given by [3]

    J_min = σ_a² exp( T ∫_{−1/(2T)}^{1/(2T)} ln [ N_0 / (N_0 + σ_a² Φ(e^{j2πfT})) ] df )    (8.113)

Applying the inequality

    exp( ∫ ln f(a) da ) ≤ ∫ f(a) da    (8.114)

to (8.113), we can compare the performance of a linear equalizer, given by (8.40), with that of a DFE, given by (8.113): the result is that, assuming Φ(e^{j2πfT}) not constant and the absence of detection errors in the DFE, for infinite-order filters the J_min of a DFE is always smaller than the J_min of a LE.
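This comparison can be reproduced numerically. For the infinite-length LE we assume the analogous arithmetic-mean expression J_min = σ_a² T ∫ N_0/(N_0 + σ_a² Φ) df (our reading of (8.40)); both integrals are evaluated on a dense grid for a two-ray folded spectrum of our choosing.

```python
import numpy as np

T, N0, sigma_a2, a = 1.0, 0.1, 1.0, 0.7
f = np.linspace(-0.5 / T, 0.5 / T, 20001)
df = f[1] - f[0]
Phi = np.abs(1 + a * np.exp(-2j * np.pi * f * T)) ** 2   # folded channel spectrum
ratio = N0 / (N0 + sigma_a2 * Phi)

J_dfe = sigma_a2 * np.exp(T * np.sum(np.log(ratio)) * df)  # geometric mean, (8.113)
J_le = sigma_a2 * T * np.sum(ratio) * df                   # arithmetic mean
print(J_dfe < J_le)   # True: exp-mean-log <= mean whenever Phi is not constant
```

The gap between the two values is exactly the geometric-vs-arithmetic mean inequality (8.114); it closes only when Φ is flat over the band.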
If FF and FB filters with a finite number of coefficients are employed, in analogy with (8.44), also for a DFE we have

    γ_DFE = 2/J_min    (8.115)

where J_min is given by (8.98). An analogous relation holds for an FS-DFE, with J_min given by (8.112).
Remarks
1. In the absence of errors of the data detector, the DFE has better asymptotic (M_1, M_2 → ∞) performance than the linear equalizer. However, for a given finite number of coefficients M_1 + M_2, the performance is a function of the channel and of the choice of M_1.
2. The DFE is definitely superior to the linear equalizer for channels that exhibit large variations of the attenuation in the passband, as in such cases the linear equalizer tends to enhance the noise.
3. Detection errors tend to spread, because they produce incorrect cancellations. Error propagation leads to an increase of the error probability. However, simulations indicate that for typical channels and symbol error probabilities smaller than 5·10⁻², error propagation is not catastrophic.
4. For channels with impulse response {h_i} such that detection errors may spread catastrophically, instead of the DFE structure it is better to implement the linear FF equalizer at the receiver and the FB filter at the transmitter as a precoder, using the precoding methods discussed in Appendix 7.A and Chapter 13.
8.6. Convergence behavior of adaptive equalizers

We consider a setting in which the signal-to-noise ratio Γ = σ_a² r_h(0)/σ_w̃² at the equalizer input is equal to 20 dB. The sequence of symbols a_k ∈ {−1, 1} is a PN training sequence of length L = 63.
Adaptive LE
With reference to the scheme of Figure 8.6, we consider a LE with N = 15 coefficients. In terms of mean-square error at convergence, the best results are obtained for a delay D = 0 in the case of h_min and D = 8 in the case of h_nom; we observe that the overall impulse response h_nom is not centered at the origin, and the delay D is the sum of the delays introduced by {h_n} and {c_n}.
Figures 8.15c, d and 8.16c, d show curves of mean-square error convergence for standard LMS and RLS algorithms (see Section 3.1.2) for minimum and non-minimum phase channels, respectively; in the plots, J_min represents the minimum value of J achieved with optimum coefficients computed by the direct method. We note that J_min is 4 dB higher than the value given by σ_w̃², because of the noise and residual ISI at the decision point. The impulse response {c_opt,n} of the optimum LE and the overall system impulse response {ψ_n = (h * c_opt)_n} are depicted in Figures 8.15a, b and 8.16a, b for the two channels.
The curves of convergence of J(k) indicate that the RLS algorithm succeeds in achieving convergence by the end of the training sequence, whereas the LMS algorithm still presents a considerable offset from the optimum conditions, even though a large adaptation gain μ is chosen.
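The fast convergence of the RLS can be reproduced with a minimal implementation; the recursion below is the standard exponentially weighted RLS, while the short noiseless training setup is a toy of our own choosing.

```python
import numpy as np

def rls(x, d, N=4, lam=0.99, delta=100.0):
    """Recursive least squares with forgetting factor lam and initial P = delta*I."""
    P = delta * np.eye(N)
    c = np.zeros(N)
    J = []
    for k in range(N, len(d)):
        u = x[k:k - N:-1]                       # most recent N input samples
        g = P @ u / (lam + u @ P @ u)           # gain vector
        e = d[k] - c @ u                        # a-priori error
        c = c + g * e
        P = (P - np.outer(g, u @ P)) / lam      # inverse-correlation update
        J.append(e ** 2)
    return c, np.array(J)

rng = np.random.default_rng(2)
h = np.array([1.0, 0.4])
x = rng.choice([-1.0, 1.0], size=200)           # short PN-like training sequence
d = np.convolve(x, h)[: len(x)]
c, J = rls(x, d)
print(J[-1] < 1e-6)                             # True: optimum reached well within the block
```

In this noiseless toy case the squared a-priori error collapses within a few tens of samples, in contrast with the slower, step-size-limited decay of the LMS.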
Figure 8.15. System impulse responses (|c_opt,n| and |ψ_n|) and curves of mean-square error convergence J(k) for the LMS and RLS algorithms, estimated over 500 realizations, for a channel with minimum phase impulse response, and a LE employing the LMS with μ = 0.062 or the RLS.
Figure 8.16. System impulse responses (|c_opt,n| and |ψ_n|) and curves of mean-square error convergence J(k) for the LMS and RLS algorithms, estimated over 500 realizations, for a channel with non-minimum phase impulse response, and a LE employing the LMS with μ = 0.343 or the RLS.
Figure 8.17. System impulse responses (|c_opt,n| and |ψ_n|) and curves of mean-square error convergence J(k) for the LMS and RLS algorithms, estimated over 500 realizations, for a channel with minimum phase impulse response, and a DFE employing the LMS with μ = 0.063 or the RLS.
Figure 8.18. System impulse responses (|c_opt,n| and |ψ_n|) and curves of mean-square error convergence J(k) for the LMS and RLS algorithms, estimated over 500 realizations, for a channel with non-minimum phase impulse response, and a DFE employing the LMS with μ = 0.143 or the RLS.
Adaptive DFE
We consider now the performance of a DFE as illustrated in Figure 8.13, with parameters M_1 = 10, M_2 = 5, and D = 9, for both h_min and h_nom. Also in this case the chosen value of D gives the best results in terms of the value of J at convergence.
Figures 8.17c, d and 8.18c, d show curves of mean-square error convergence for standard LMS and RLS algorithms, for minimum and non-minimum phase channels, respectively. The impulse response {c_opt,n} of the optimum FF filter and the overall system impulse response {ψ_n = (h * c_opt)_n} are depicted in Figures 8.17a, b and 8.18a, b, for the two channels.
8.8. DFE: alternative configurations

For a LE with N coefficients, in the absence of noise the sample at the decision point is

    y_k = Σ_{i=0}^{N−1} c_i x_{k−i} = Σ_{p=−N_1}^{N_2+N−1} ψ_p a_{k−p}    (8.116)

where

    ψ_p = Σ_{ℓ=0}^{N−1} c_ℓ h_{p−ℓ} = c_0 h_p + c_1 h_{p−1} + … + c_{N−1} h_{p−(N−1)},
    p = −N_1, …, 0, …, N_2 + N − 1    (8.117)

and the zero-forcing condition reads

    ψ_p = δ_{p−D}    (8.118)
DFE-ZF
We consider a receiver with a matched filter g_M (see Figure 8.1) followed by the DFE illustrated in Figure 8.19, where, to simplify the notation, we assume t_0 = 0 and D = 0. The z-transform of the QAM system impulse response is Φ(z), as defined in (8.25). With reference to Figure 8.19, the matched filter output x_k is input to a zero-forcing linear equalizer (LE-ZF) with transfer function 1/Φ(z) to remove the ISI; the LE-ZF output is given by (8.119).
From (8.29), using the property (8.26), we obtain that the spectrum P_wE(z) of w_E,k is given by

    P_wE(z) = N_0 Φ(z) · 1/(Φ(z) Φ*(1/z*)) = N_0/Φ(z)    (8.120)
As Φ(z) is the z-transform of a correlation sequence, it can be factorized as (see page 53)

    Φ(z) = F(z) F*(1/z*)    (8.121)

where

    F(z) = Σ_{n=0}^{+∞} f_n z^{−n}    (8.122)

is a minimum-phase function, that is, with poles and zeros inside the unit circle, associated with a causal sequence {f_n}.
Observation 8.9
A useful method to determine the filter F(z) in (8.121), with a computational complexity proportional to the square of the number of filter coefficients, is obtained by considering a minimum-phase prediction error filter A(z) = 1 + a′_{1,N} z^{−1} + … + a′_{N,N} z^{−N} (see page 147), designed using the ACS {r_qC(nT)} defined by (8.23). The equation to determine the coefficients of A(z) is given by (2.85), where {r_x(n)} is now substituted by {r_qC(nT)}. The final result is F(z) = f_0/A(z).
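A sketch of this procedure follows, using the Levinson–Durbin recursion as our assumed form of (2.85); it is checked against the exponential-channel case, where F(z) = sqrt(E_qC(1 − a²))/(1 − a z^{−1}) (see (8.132)).

```python
import numpy as np

def levinson(r, N):
    """Levinson-Durbin: minimum-phase prediction error filter A(z) of order N
    and final prediction error power E, from the autocorrelation r(0..N)."""
    a = np.array([1.0])
    E = r[0]
    for n in range(1, N + 1):
        k = -(a @ r[n:0:-1]) / E          # reflection coefficient
        a = np.concatenate((a, [0.0])) + k * np.concatenate(([0.0], a[::-1]))
        E *= 1 - k ** 2
    return a, E

# ACS of the exponential-channel example with a = 0.5 and E_qC = 1: r(n) = a^|n|
aa = 0.5
r = aa ** np.abs(np.arange(6))
A, E = levinson(r, 5)
f0 = np.sqrt(E)
print(np.round(A, 3))   # [1, -0.5, 0, 0, 0, 0]: F(z) = f0 / (1 - 0.5 z^{-1})
```

Here E = 1 − a² = 0.75, so f_0 = sqrt(0.75), matching the factor sqrt(E_qC(1 − a²)) of the exponential-channel factorization.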
On the other hand, F*(1/z*) is a function with zeros and poles outside the unit circle, associated with an anticausal sequence {f*_{−n}}.
Define the whitening filter

    W(z) = (1/f_0) F(z)    (8.123)

The ISI term in z_k is determined by W(z) − 1, hence there are no precursors; the noise is white with statistical power N_0/|f_0|². Therefore the filter w is called whitening filter (WF). In any case, the filter composed of the cascade of the LE-ZF and w is also a WF.
If â_k = a_k then, for the FB filter given by (8.124), the FB filter removes the ISI present in z_k and leaves the white noise unchanged. As y_k is not affected by ISI, this structure is called DFE-ZF, for which we obtain

    γ_DFE-ZF = 2|f_0|²/N_0    (8.125)
Observation 8.10
With reference to the scheme of Figure 8.20, using a LE-ZF instead of a DFE structure means that a filter with transfer function f_0/F(z) is placed after the WF to produce the signal x_E,k given by (8.119). For a data detector based on x_E,k, the signal-to-noise ratio is

    γ_LE-ZF = 2/r_wE(0)    (8.127)
Example 8.8.1 (Channel with exponential impulse response)
Let q_C(t), given by (8.128), be the overall system impulse response at the MF input; in (8.128) E_qC is the energy of q_C. The autocorrelation of q_C, sampled at instants nT, is given by (8.129)–(8.130). Then

    Φ(e^{j2πfT}) = E_qC (1 − a²) / (1 + a² − 2a cos(2πfT))    (8.131)

which presents a minimum at f = 1/(2T).
With reference to the factorization (8.121), it is easy to identify the poles and zeros of Φ(z) inside the unit circle, hence

    F(z) = sqrt(E_qC (1 − a²)) · 1/(1 − a z^{−1})
         = sqrt(E_qC (1 − a²)) Σ_{n=0}^{+∞} a^n z^{−n}    (8.132)

Moreover,

    1/(f_0 F*(1/z*)) = (1 − a z)/(E_qC (1 − a²)) = z (z^{−1} − a)/(E_qC (1 − a²))    (8.134)

In this case the WF, apart from a delay of one sample (D = 1), can be implemented by a simple FIR filter with two coefficients, whose values are equal to −a/(E_qC (1 − a²)) and 1/(E_qC (1 − a²)).
The FB filter is a first-order IIR filter with transfer function

    1 − (1/f_0) F(z) = − Σ_{n=1}^{+∞} a^n z^{−n}
                     = 1 − 1/(1 − a z^{−1})
                     = − a z^{−1}/(1 − a z^{−1})    (8.135)
Example 8.8.2 (Channel with two-ray impulse response)
With this normalization, E_qC is the energy of {q_CC(nT)} and Q_CC(z) is minimum phase. Equation (8.137) represents the discrete-time model of a wireless system with a two-ray channel. The impulse response is given by

    q_CC(nT) = sqrt(E_qC) (q_0 δ_n + q_1 δ_{n−1})    (8.140)

and

    f_0 = sqrt(E_qC) q_0    (8.144)
The WF is given by

    1/(f_0 F*(1/z*)) = 1/(E_qC |q_0|² (1 − b z))    (8.145)

where

    b = (q_1/q_0)*

We note that the WF has a pole at z = b^{−1} which, recalling (8.139), lies outside the unit circle. In this case, in order to have a stable filter, it is convenient to associate the z-transform 1/(1 − bz) with an anticausal sequence,

    1/(1 − bz) = Σ_{i=−∞}^{0} (bz)^{−i} = Σ_{n=0}^{+∞} (bz)^n    (8.146)
On the other hand, as |b| < 1, we can approximate the series by considering only the first N + 1 terms, obtaining

    1/(f_0 F*(1/z*)) ≈ (1/(E_qC |q_0|²)) Σ_{n=0}^{N} (bz)^n
                     = (1/(E_qC |q_0|²)) z^N [b^N + … + b z^{−(N−1)} + z^{−N}]    (8.147)

Consequently the WF, apart from a delay D = N, can be implemented by an FIR filter with N + 1 coefficients.
The FB filter in this case is a simple FIR filter with one coefficient:

    1 − (1/f_0) F(z) = −(q_1/q_0) z^{−1}    (8.148)
The scheme of Figure 8.19 can then be redrawn as in Figure 8.21, where the FB filter acts as a noise predictor. In fact, for â_k = a_k, the FB filter input is colored noise. By removing the correlated noise component from x_E,k, we obtain y_k, composed of white noise with minimum variance plus the desired symbol a_k.
Figure 8.21. Predictive DFE: the FF filter is a zero-forcing linear equalizer.
Figure 8.22. Predictive DFE with the FF filter as a minimum-MSE linear equalizer.
Let now the FF filter be the MSE-optimum linear equalizer c given by (8.38). The z-transform of the overall impulse response at the FF filter output is given by

    Φ(z)C(z) = σ_a² Φ(z) / (N_0 + σ_a² Φ(z))    (8.150)

As t_0 = 0, the ISI in z_k is given by

    Φ(z)C(z) − 1 = −N_0 / (N_0 + σ_a² Φ(z))    (8.151)

Hence, the spectral density of the ISI has the following expression:

    P_ISI(z) = Pa(z) [N_0/(N_0 + σ_a² Φ(z))] [N_0/(N_0 + σ_a² Φ*(1/z*))]
             = σ_a² N_0² / (N_0 + σ_a² Φ(z))²    (8.152)

using (8.26) and the fact that the symbols are uncorrelated, with Pa(z) = σ_a².
The spectrum of the noise in z_k is given by

    P_noise(z) = N_0 Φ(z) C(z) C*(1/z*) = N_0 σ_a⁴ Φ(z) / (N_0 + σ_a² Φ(z))²    (8.153)

Therefore the spectrum of the overall disturbance v_k in z_k, composed of ISI and noise, is given by

    P_v(z) = N_0 σ_a² / (N_0 + σ_a² Φ(z))    (8.154)

We note that the FF filter could be an FSE and the result (8.154) would not change.
An alternative form is given by (8.155)–(8.156), where

    A(z) = Σ_{n=0}^{+∞} a′_n z^{−n},   a′_0 = 1    (8.157)

is the forward prediction error filter defined in (2.83). To determine B(z) we use the spectral factorization (1.526):

    P_v(z) = σ_y² / (A(z) A*(1/z*)) = σ_y² / ( [1 − B(z)][1 − B*(1/z*)] )    (8.158)

with P_v(z) given by (8.154).
In conclusion, it results that the prediction error signal y_k is a white noise process with statistical power equal to σ_y².
In an adaptive version of the basic scheme of Figure 8.22, the two filters c and b are separately adapted through the error signals {e_F,k} and {e_B,k}, respectively. This configuration, although sub-optimum with respect to the DFE, is used in conjunction with trellis-coded modulation (see Chapter 12) [4].
8.9. Benchmark performance for two equalizers

Performance comparison
From (8.120), the noise sequence {w_E,k} can be modeled as the output of a filter having transfer function

    C_F(z) = 1/F(z)    (8.159)

with input given by white noise with PSD N_0. Because F(z) is minimum phase, C_F(z) is also causal:

    C_F(z) = Σ_{n=0}^{+∞} c_F,n z^{−n}    (8.160)

where {c_F,n}, n ≥ 0, is the filter impulse response.
Then we can express the statistical power of w_E,k as

    r_wE(0) = N_0 Σ_{n=0}^{+∞} |c_F,n|²    (8.161)

where, from (8.159),

    c_F,0 = C_F(∞) = 1/F(∞) = 1/f_0    (8.162)

Using the inequality

    Σ_{n=0}^{+∞} |c_F,n|² ≥ |c_F,0|² = 1/|f_0|²    (8.163)

the comparison between (8.125) and (8.127) yields

    γ_LE-ZF ≤ γ_DFE-ZF    (8.164)
LE-ZF
Channel with exponential impulse response. From (8.130), the coefficient of z⁰ in N_0/Φ(z) is equal to

    r_wE(0) = (N_0/E_qC) · (1 + a²)/(1 − a²)    (8.165)

and, consequently, from (8.127),

    γ_LE-ZF = 2 (E_qC/N_0) · (1 − a²)/(1 + a²)    (8.166)

Using the expression of γ_MF (7.113), obtained for a MF receiver in the absence of ISI, we get

    γ_LE-ZF = γ_MF (1 − a²)/(1 + a²)    (8.167)

Therefore the loss due to the ISI, given by the factor (1 − a²)/(1 + a²), can be very large if a is close to 1. In this case the frequency response (8.131) assumes a minimum value close to zero.
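The ISI loss factor of (8.167) is easy to evaluate directly:

```python
import numpy as np

# loss of the LE-ZF relative to the ideal MF bound: (1 - a^2)/(1 + a^2), in dB
for a in (0.1, 0.5, 0.9, 0.99):
    loss_db = 10 * np.log10((1 - a ** 2) / (1 + a ** 2))
    print(f"a = {a:5}: loss = {loss_db:7.2f} dB")   # a = 0.9 gives about -9.8 dB
```

A mild exponential tail (a = 0.1) costs almost nothing, while a ≈ 1 pushes the loss beyond 10–20 dB, consistent with the spectral near-null of (8.131).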
Channel with two-ray impulse response. By a partial fraction expansion, we find that only the pole at z = −q_1/q_0 lies inside the unit circle, hence

    r_wE(0) = N_0 / (E_qC (|q_0|² − |q_1|²))    (8.169)

Then

    γ_LE-ZF = 2 E_qC (|q_0|² − |q_1|²)/N_0

Also in this case we find that the LE is unable to equalize channels with a spectral zero.
DFE-ZF
In this case the advantage with respect to the LE-ZF is given by the factor |q_0|²/(|q_0|² − |q_1|²), which may be substantial if |q_1| ≃ |q_0|.
We recall that in the case E_qC ≫ N_0, that is for low noise levels, the performances of the LE and DFE are similar to those of the LE-ZF and DFE-ZF, respectively. For the two systems of Examples 8.8.1 and 8.8.2, the values of γ in terms of J_min are given in [4], for both LE and DFE.
8.10. Optimum methods for data detection

Actually, the decision criterion that minimizes the probability that an error occurs in the detection of a symbol of the sequence {a_k} requires, in general, that the entire sequence of received samples is considered for symbol detection.
We assume a sampled signal having the following structure:

    z_k = u_k + w_k,   k = 0, 1, …, K − 1    (8.173)

where:
• u_k is the desired signal component, given by (8.174), where ψ_0 is the sample of the overall system impulse response obtained in correspondence of the optimum timing phase, and {ψ_{−L_1}, …, ψ_{−1}} are the precursors, which are typically negligible with respect to ψ_0; this pulse coincides, apart from the shift by D, with the pulse {ψ_p} given by (8.90). We assume the coefficients {ψ_n} are known; in practice, however, they are estimated by the methods discussed in Appendix 3.B;
• w_k is circularly symmetric white Gaussian noise, with equal statistical power in the two I and Q components given by σ_I² = σ_w²/2; hence the samples {w_k} are uncorrelated and therefore statistically independent.
    a = [a_{−L_1}, a_{−L_1+1}, …, a_{−L_1+K−1}]^T,   a_i ∈ A    (8.175)
    â = [â_{−L_1}, â_{−L_1+1}, …, â_{−L_1+K−1}]^T,   â_i ∈ A    (8.176)
    α = [α_{−L_1}, α_{−L_1+1}, …, α_{−L_1+K−1}]^T,   α_i ∈ A    (8.177)
    z = [z_0, z_1, …, z_{K−1}]^T    (8.178)
    ρ = [ρ_0, ρ_1, …, ρ_{K−1}]^T,   ρ_i ∈ ℂ,  ρ ∈ ℂ^K    (8.179)

Let M be the cardinality of the alphabet A. By analogy with the analysis of Section 6.1, we divide the vector space of the received samples, ℂ^K, into M^K non-overlapping regions

    R_α,   α ∈ A^K    (8.180)

such that

    ρ ∈ R_α ⟹ â = α    (8.181)

The probability of correct detection of the entire sequence is

    P[C] = P[â = a]
         = Σ_{α∈A^K} P[â = α | a = α] P[a = α]
         = Σ_{α∈A^K} P[z ∈ R_α | a = α] P[a = α]
         = Σ_{α∈A^K} ( ∫_{R_α} p_{z|a}(ρ | α) dρ ) P[a = α]    (8.182)

In other words, the sequence α is chosen for which the probability of observing z = ρ is maximum.
As $w_k$ is a complex-valued Gaussian r.v. with zero mean and variance $\sigma_w^2$, it follows that
$$p_{\mathbf z\mid\mathbf a}(\boldsymbol\rho \mid \boldsymbol\alpha) = \prod_{k=0}^{K-1} \frac{1}{\pi\sigma_w^2}\, e^{-\frac{1}{\sigma_w^2}|\rho_k - u_k|^2} \tag{8.188}$$
Taking the logarithm, which is a monotonically increasing function, of both sides we get
$$\ln p_{\mathbf z\mid\mathbf a}(\boldsymbol\rho \mid \boldsymbol\alpha) \propto -\sum_{k=0}^{K-1} |\rho_k - u_k|^2 \tag{8.189}$$
The detected sequence is the sequence that yields the smallest value of $\Gamma(K-1)$; as in the case of i.i.d. symbols there are $M^K$ possible sequences, this method has a complexity $O(M^K)$.
³ We note that (8.187) formally requires that the vector $\mathbf a$ be extended to include the symbols $a_{-L_2}, \ldots, a_{L_1-1}$.
for each possible pair of distinct $\boldsymbol\alpha$ and $\boldsymbol\beta$ in $\mathcal A^K$. As the noise is white, we define
$$d_{min}^2 = \min_{\boldsymbol\alpha,\boldsymbol\beta} d^2(\mathbf u(\boldsymbol\alpha), \mathbf u(\boldsymbol\beta)) \tag{8.194}$$
where $M^K$ is the number of vectors $\mathbf u(\mathbf a)$, and $N_{min}$ is the number of vectors $\mathbf u(\mathbf a)$ whose distance from another vector is $d_{min}$.
In practice, the exhaustive method for the computation of the expressions in (8.192) and (8.194) is not used; the Viterbi algorithm, which will be discussed in the next section, is used instead.
$$\mathbf s_k \in \mathcal S = \{\boldsymbol\sigma_1, \boldsymbol\sigma_2, \ldots, \boldsymbol\sigma_{N_s}\} \tag{8.197}$$
Observation 8.11
We denote by $\mathbf s'_{k-1}$ the vector that is obtained by removing from $\mathbf s_{k-1}$ the oldest symbol, $a_{k-L_2}$. Then, for
$$\Gamma_k = \sum_{i=0}^{k} |\rho_i - u_i|^2 \tag{8.200}$$
the following recursive equation holds:
$$\Gamma_k = \Gamma_{k-1} + |\rho_k - u_k|^2 \tag{8.201}$$
Example 8.10.1
Let us consider a transmission system with symbols taken from a binary alphabet, that is $M = 2$, $a_k \in \{-1, 1\}$, and overall impulse response characterized by $L_1 = L_2 = 1$. In this case we have
$$\mathbf s_k = (a_{k+1}, a_k) \tag{8.203}$$
and the set of states contains $N_s = 2^2 = 4$ elements:
$$\mathcal S = \{\boldsymbol\sigma_1 = (-1,-1),\ \boldsymbol\sigma_2 = (-1,1),\ \boldsymbol\sigma_3 = (1,-1),\ \boldsymbol\sigma_4 = (1,1)\} \tag{8.204}$$
The possible transitions
$$\mathbf s_{k-1} = \boldsymbol\sigma_i \rightarrow \mathbf s_k = \boldsymbol\sigma_j \tag{8.205}$$
are represented in Figure 8.23, where a dot indicates a possible value of the state at a certain instant, and a branch indicates a possible transition between two states at consecutive instants. According to (8.198), the variable that determines a transition is $a_{k+L_1}$. Figure 8.23, extended for all instants $k$, is called a trellis diagram. We note that in this case there are exactly $M$ transitions that leave each state $\mathbf s_{k-1}$; likewise, there are $M$ transitions that arrive at each state $\mathbf s_k$.
With each state $\boldsymbol\sigma_j$, $j = 1, \ldots, N_s$, at instant $k$ we associate two quantities:
1. the minimum path metric $\Gamma(\mathbf s_k = \boldsymbol\sigma_j)$ among all paths that end in that state;
2. the survivor sequence, defined as the sequence of symbols that ends in that state and determines $\Gamma(\mathbf s_k = \boldsymbol\sigma_j)$:
$$\mathcal L(\mathbf s_k = \boldsymbol\sigma_j) = (\mathbf s_0, \mathbf s_1, \ldots, \mathbf s_k = \boldsymbol\sigma_j) = (a_{L_1}, \ldots, a_{k+L_1}) \tag{8.207}$$
[Figure 8.23 shows the four states $\boldsymbol\sigma_1 = (-1,-1)$, $\boldsymbol\sigma_2 = (-1,1)$, $\boldsymbol\sigma_3 = (1,-1)$, $\boldsymbol\sigma_4 = (1,1)$ and the branches that connect them, each labeled with the input symbol $\pm 1$.]
Figure 8.23. Portion of the trellis diagram showing the possible transitions from state $\mathbf s_{k-1}$ to state $\mathbf s_k$, as a function of the symbol $a_{k+L_1} \in \mathcal A$.
Note that the notion of survivor sequence can be equivalently applied to a sequence
of symbols or to a sequence of states.
These two quantities are determined recursively. In fact, it is easy to verify that if, at instant $k$, a survivor sequence of states includes $\mathbf s_k = \boldsymbol\sigma_j$, then, at instant $k-1$, the same sequence includes $\mathbf s_{k-1} = \boldsymbol\sigma_{i_{opt}}$, which is determined as follows:
Analogously, if the final state $\mathbf s_K$ is equal to $\boldsymbol\sigma_{f_0}$, the optimum sequence of states coincides with the survivor sequence associated with the state $\mathbf s_{K-1} = \boldsymbol\sigma_j$ having minimum cost among those that admit a transition into $\mathbf s_K = \boldsymbol\sigma_{f_0}$.
Example 8.10.2
Let us consider a system with the following characteristics: $a_k \in \{-1, 1\}$, $L_1 = 0$, $L_2 = 2$, $K = 4$, and $\mathbf s_{-1} = (-1, -1)$. The development of the survivor sequences on the trellis diagram from $k = 0$ to $k = K - 1 = 3$ is represented in Figure 8.24a. The branch metric $|\rho_k - f(\boldsymbol\sigma_j, \boldsymbol\sigma_i)|^2$, associated with each transition, is given in this example and is written above each branch. The survivor paths associated with each state are represented in bold; we note that some paths are abruptly interrupted and not extended at the following instant: for example, the path ending at state $\boldsymbol\sigma_3$ at instant $k = 1$.
Figure 8.24b illustrates how the survivor sequences of Figure 8.24a are determined. Starting with $\mathbf s_{-1} = (-1, -1)$ we have
$$\Gamma(\mathbf s_{-1} = \boldsymbol\sigma_1) = 0 \qquad \Gamma(\mathbf s_{-1} = \boldsymbol\sigma_2) = \infty \qquad \Gamma(\mathbf s_{-1} = \boldsymbol\sigma_3) = \infty \qquad \Gamma(\mathbf s_{-1} = \boldsymbol\sigma_4) = \infty \tag{8.212}$$
We observe that the result (8.213) is obtained for $\mathbf s_{-1} = \boldsymbol\sigma_1$. Then the survivor sequence associated with $\mathbf s_0 = \boldsymbol\sigma_1$ is $\mathcal L(\mathbf s_0 = \boldsymbol\sigma_1) = (-1)$, expressed as a sequence of symbols rather than states.
Considering now $\mathbf s_0 = \boldsymbol\sigma_2$, $\boldsymbol\sigma_3$, and $\boldsymbol\sigma_4$, in sequence, and applying (8.208), the first iteration is completed. Obviously, there is no interest in determining the survivor sequence for states with metric equal to $\infty$, because the corresponding path will not be extended.
Next, for $k = 1$, the final metrics and the survivor sequences are shown in the second diagram of Figure 8.24b.
The same procedure is repeated for $k = 2, 3$, and the trellis diagram is completed. The minimum among the values assumed by $\Gamma(\mathbf s_3)$ yields the detected sequence
$$\hat a_0 = 1 \qquad \hat a_1 = 1 \qquad \hat a_2 = 1 \qquad \hat a_3 = 1 \tag{8.219}$$
In practice, if the parameter $K$ is large, the length of the survivor sequences is limited to a value $K_d$, called trellis depth or path memory depth, that is typically between 3 and 10 times the length of the channel impulse response: this means that at every instant $k$ we decide on $a_{k-K_d+L_1}$, a value that is then removed from the diagram. The decision is based on the survivor sequence associated with the minimum among the values of $\Gamma(\mathbf s_k)$.
In practice, $k$ and $\Gamma(\mathbf s_k)$ may become very large; the latter value is then usually normalized by subtracting the same amount from all the metrics, for example the smallest of the metrics $\Gamma(\mathbf s_k = \boldsymbol\sigma_i)$, $i = 1, \ldots, N_s$.
which represents the a priori probability of the generic symbol. There are algorithms to iteratively estimate this probability from the output of another decoder or equalizer (see Section 11.5). Here, for the time being, we assume there is no a priori knowledge of the symbols. Consequently, for i.i.d. symbols, every state $\mathbf s_k = \boldsymbol\sigma_j$ can be reached by $M$ states, and
$$P[a_{k+L_1} = \beta] = \frac{1}{M} \qquad \beta \in \mathcal A \tag{8.225}$$
If there is no transition from $\mathbf s_{k-1} = \boldsymbol\sigma_i$ to $\mathbf s_k = \boldsymbol\sigma_j$, we set
$$\Pi(j \mid i) = 0 \tag{8.226}$$
$$p_{z_k}(\rho_k \mid j, i) = \frac{1}{\pi\sigma_w^2}\, e^{-\frac{1}{\sigma_w^2}|\rho_k - u_k|^2} \tag{8.228}$$
where $u_k = f(\boldsymbol\sigma_j, \boldsymbol\sigma_i)$.
4. We merge (8.223) and (8.227) by defining the variable
$$C_k(j \mid i) = P[z_k = \rho_k,\ \mathbf s_k = \boldsymbol\sigma_j \mid \mathbf s_{k-1} = \boldsymbol\sigma_i] = p_{z_k}(\rho_k \mid j, i)\ \Pi(j \mid i)$$
$$\mathbf s_{-1} = \boldsymbol\sigma_{i_0} \qquad \mathbf s_K = \boldsymbol\sigma_{f_0} \tag{8.233}$$
we set
$$\bar p_j = \begin{cases} 1 & \text{for } \boldsymbol\sigma_j = \boldsymbol\sigma_{i_0} \\ 0 & \text{otherwise} \end{cases} \tag{8.234}$$
and
$$\bar q_j = \begin{cases} 1 & \text{for } \boldsymbol\sigma_j = \boldsymbol\sigma_{f_0} \\ 0 & \text{otherwise} \end{cases} \tag{8.235}$$
a) Forward metric
$$F_k(j) = P[\mathbf z_0^k = \boldsymbol\rho_0^k,\ \mathbf s_k = \boldsymbol\sigma_j] \tag{8.236}$$
1. Initialization:
$$F_{-1}(j) = \bar p_j \qquad j = 1, \ldots, N_s \tag{8.237}$$
2. Updating: for $k = 0, 1, \ldots, K-1$,
$$F_k(j) = \sum_{\ell=1}^{N_s} C_k(j \mid \ell)\, F_{k-1}(\ell) \qquad j = 1, \ldots, N_s \tag{8.238}$$
Proof. Using the total probability theorem, and conditioning the event on the possible values of $\mathbf s_{k-1}$, we express the probability in (8.236) as
$$\begin{aligned} F_k(j) &= \sum_{\ell=1}^{N_s} P[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1},\ z_k = \rho_k,\ \mathbf s_k = \boldsymbol\sigma_j,\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell] \\ &= \sum_{\ell=1}^{N_s} P[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1},\ z_k = \rho_k \mid \mathbf s_k = \boldsymbol\sigma_j,\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell]\ P[\mathbf s_k = \boldsymbol\sigma_j,\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell] \end{aligned} \tag{8.239}$$
Because the noise samples are i.i.d., once the values of $\mathbf s_k$ and $\mathbf s_{k-1}$ are assigned, the event $[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1}]$ is independent of the event $[z_k = \rho_k]$, and it results
$$F_k(j) = \sum_{\ell=1}^{N_s} P[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1} \mid \mathbf s_k = \boldsymbol\sigma_j,\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell]\ P[z_k = \rho_k \mid \mathbf s_k = \boldsymbol\sigma_j,\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell]\ P[\mathbf s_k = \boldsymbol\sigma_j,\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell] \tag{8.240}$$
Moreover, given $\mathbf s_{k-1}$, the event $[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1}]$ is independent of $\mathbf s_k$, and we have
$$F_k(j) = \sum_{\ell=1}^{N_s} P[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1} \mid \mathbf s_{k-1} = \boldsymbol\sigma_\ell]\ P[z_k = \rho_k,\ \mathbf s_k = \boldsymbol\sigma_j \mid \mathbf s_{k-1} = \boldsymbol\sigma_\ell]\ P[\mathbf s_{k-1} = \boldsymbol\sigma_\ell] \tag{8.241}$$
$$= \sum_{\ell=1}^{N_s} P[\mathbf z_0^{k-1} = \boldsymbol\rho_0^{k-1},\ \mathbf s_{k-1} = \boldsymbol\sigma_\ell]\ P[z_k = \rho_k,\ \mathbf s_k = \boldsymbol\sigma_j \mid \mathbf s_{k-1} = \boldsymbol\sigma_\ell] \tag{8.242}$$
from which, recalling the definitions of $F_{k-1}(\ell)$ and $C_k(j \mid \ell)$, (8.238) follows.
b) Backward metric
$$B_k(i) = P[\mathbf z_{k+1}^{K-1} = \boldsymbol\rho_{k+1}^{K-1} \mid \mathbf s_k = \boldsymbol\sigma_i] \tag{8.243}$$
Equation (8.243) is the probability of observing the sequence $\rho_{k+1}, \ldots, \rho_{K-1}$, from instant $k+1$ onwards, given the state $\boldsymbol\sigma_i$ at instant $k$.
A recursive expression also exists for $B_k(i)$.
1. Initialization:
$$B_K(i) = \bar q_i \qquad i = 1, \ldots, N_s \tag{8.244}$$
2. Updating: for $k = K-1, K-2, \ldots, 0$,
$$B_k(i) = \sum_{m=1}^{N_s} B_{k+1}(m)\, C_{k+1}(m \mid i) \qquad i = 1, \ldots, N_s \tag{8.245}$$
Proof. Using the total probability theorem, and conditioning the event on the possible values of $\mathbf s_{k+1}$, we express the probability in (8.243) as
$$\begin{aligned} B_k(i) &= \sum_{m=1}^{N_s} P[\mathbf z_{k+1}^{K-1} = \boldsymbol\rho_{k+1}^{K-1},\ \mathbf s_{k+1} = \boldsymbol\sigma_m \mid \mathbf s_k = \boldsymbol\sigma_i] \\ &= \sum_{m=1}^{N_s} P[\mathbf z_{k+1}^{K-1} = \boldsymbol\rho_{k+1}^{K-1} \mid \mathbf s_{k+1} = \boldsymbol\sigma_m,\ \mathbf s_k = \boldsymbol\sigma_i]\ P[\mathbf s_{k+1} = \boldsymbol\sigma_m \mid \mathbf s_k = \boldsymbol\sigma_i] \end{aligned} \tag{8.246}$$
Now, given the values of $\mathbf s_{k+1}$ and $\mathbf s_k$, the event $[\mathbf z_{k+2}^{K-1} = \boldsymbol\rho_{k+2}^{K-1}]$ is independent of $[z_{k+1} = \rho_{k+1}]$. In turn, assigned the value of $\mathbf s_{k+1}$, the event $[\mathbf z_{k+2}^{K-1} = \boldsymbol\rho_{k+2}^{K-1}]$ is independent of $\mathbf s_k$. Then (8.246) becomes
$$B_k(i) = \sum_{m=1}^{N_s} P[\mathbf z_{k+2}^{K-1} = \boldsymbol\rho_{k+2}^{K-1} \mid \mathbf s_{k+1} = \boldsymbol\sigma_m]\ P[z_{k+1} = \rho_{k+1},\ \mathbf s_{k+1} = \boldsymbol\sigma_m \mid \mathbf s_k = \boldsymbol\sigma_i]$$
from which, recalling the definitions of $B_{k+1}(m)$ and $C_{k+1}(m \mid i)$, (8.245) follows.
c) State metric
$$V_k(i) = P[\mathbf s_k = \boldsymbol\sigma_i \mid \mathbf z_0^{K-1} = \boldsymbol\rho_0^{K-1}] \tag{8.248}$$
Equation (8.248) expresses the probability of being in the state $\boldsymbol\sigma_i$ at instant $k$, given the whole observation $\boldsymbol\rho_0^{K-1}$. It can be expressed as a function of the forward and backward metrics,
$$V_k(i) = \frac{F_k(i)\, B_k(i)}{\displaystyle\sum_{n=1}^{N_s} F_k(n)\, B_k(n)} \qquad i = 1, \ldots, N_s \tag{8.249}$$
Proof. Using the fact that, given the value of $\mathbf s_k$, the r.v.s $\{z_t\}$ with $t > k$ are statistically independent of $\{z_t\}$ with $t \le k$, from (8.248) it follows that
$$\begin{aligned} V_k(i) &= P[\mathbf z_0^k = \boldsymbol\rho_0^k,\ \mathbf z_{k+1}^{K-1} = \boldsymbol\rho_{k+1}^{K-1},\ \mathbf s_k = \boldsymbol\sigma_i]\ \frac{1}{P[\mathbf z_0^{K-1} = \boldsymbol\rho_0^{K-1}]} \\ &= P[\mathbf z_0^k = \boldsymbol\rho_0^k,\ \mathbf s_k = \boldsymbol\sigma_i]\ P[\mathbf z_{k+1}^{K-1} = \boldsymbol\rho_{k+1}^{K-1} \mid \mathbf s_k = \boldsymbol\sigma_i]\ \frac{1}{P[\mathbf z_0^{K-1} = \boldsymbol\rho_0^{K-1}]} \end{aligned} \tag{8.250}$$
Observing the definitions of forward and backward metrics, (8.249) follows.
We note that the normalization factor
$$P[\mathbf z_0^{K-1} = \boldsymbol\rho_0^{K-1}] = \sum_{n=1}^{N_s} F_k(n)\, B_k(n) \tag{8.251}$$
does not depend on the instant $k$.
d) Likelihood function of the generic symbol. Applying the total probability theorem to (8.220) we obtain the relation
$$\mathsf L_k(\beta) = \sum_{i=1}^{N_s} P[a_{k+L_1} = \beta,\ \mathbf s_k = \boldsymbol\sigma_i \mid \mathbf z_0^{K-1} = \boldsymbol\rho_0^{K-1}] \tag{8.253}$$
hence
$$\mathsf L_k(\beta) = \sum_{\substack{i=1 \\ [\boldsymbol\sigma_i]_1 = \beta}}^{N_s} V_k(i) \qquad \beta \in \mathcal A \tag{8.254}$$
where the sum is restricted to the states that satisfy the condition $[\boldsymbol\sigma_i]_1 = \beta$. In other words, at instant $k$ the likelihood function coincides with the sum of the metrics $V_k(i)$ associated with the states whose first component is equal to the symbol value $\beta$. Note that $\mathsf L_k(\beta)$ can also be obtained using the state metrics evaluated at different instants, that is,
$$\mathsf L_k(\beta) = \sum_{\substack{i=1 \\ [\boldsymbol\sigma_i]_m = \beta}}^{N_s} V_{k+(m-1)}(i) \qquad \beta \in \mathcal A \tag{8.255}$$
for $m \in \{1, \ldots, L_1 + L_2\}$.
Scaling
We see that, due to the exponential form of $p_{z_k}(\rho_k \mid j, i)$ in $C_k(j \mid i)$, in a few iterations the forward and backward metrics may assume very small values; this leads to numerical problems in the computation of the metrics: therefore we need to substitute equations (8.238) and (8.245) with analogous expressions that are scaled by a suitable coefficient.
We note that the state metric (8.249) does not change if we multiply $F_k(i)$ and $B_k(i)$, $i = 1, \ldots, N_s$, by the same coefficient $\mathcal K_k$. The idea [5] is to choose
$$\mathcal K_k = \frac{1}{\displaystyle\sum_{n=1}^{N_s} F_k(n)} \tag{8.256}$$
Indicating with $\bar F_k(i)$ and $\bar B_k(i)$ the normalized metrics, for
$$F_k(j) = \sum_{\ell=1}^{N_s} C_k(j \mid \ell)\, \bar F_{k-1}(\ell) \tag{8.257}$$
we obtain
$$\bar F_k(j) = \frac{\displaystyle\sum_{\ell=1}^{N_s} C_k(j \mid \ell)\, \bar F_{k-1}(\ell)}{\displaystyle\sum_{n=1}^{N_s}\sum_{\ell=1}^{N_s} C_k(n \mid \ell)\, \bar F_{k-1}(\ell)} \qquad j = 1, \ldots, N_s, \quad k = 0, 1, \ldots, K-1 \tag{8.258}$$
$$\bar B_k(i) = \frac{\displaystyle\sum_{m=1}^{N_s} \bar B_{k+1}(m)\, C_{k+1}(m \mid i)}{\displaystyle\sum_{n=1}^{N_s}\sum_{\ell=1}^{N_s} C_k(n \mid \ell)\, \bar F_{k-1}(\ell)} \qquad i = 1, \ldots, N_s, \quad k = K-1, K-2, \ldots, 0 \tag{8.259}$$
Hence,
$$V_k(\beta) = P[z_k = \rho_k,\ \mathbf s_k = \boldsymbol\sigma_i] = P[z_k = \rho_k,\ a_k = \beta] = \mathcal K\, e^{-\frac{1}{\sigma_w^2}|\rho_k - \beta|^2} \qquad \beta \in \mathcal A \tag{8.261}$$
and, after normalization,
$$V_k(\beta) = \frac{e^{-\frac{1}{\sigma_w^2}|\rho_k - \beta|^2}}{\displaystyle\sum_{\alpha\in\mathcal A} e^{-\frac{1}{\sigma_w^2}|\rho_k - \alpha|^2}} \qquad \beta \in \mathcal A, \quad k = 0, 1, \ldots, K-1 \tag{8.262}$$
Observation 8.12
In the particular case of absence of ISI, substitution of (8.263) in (8.265) yields
$$\ell_k(\beta) = -\frac{|\rho_k - \beta|^2}{\sigma_w^2} \tag{8.269}$$
and (8.268) becomes
$$\hat a_k = \arg\min_{\beta\in\mathcal A} |\rho_k - \beta|^2 \tag{8.270}$$
We recall that in the absence of ISI the expression of $\mathsf L_k(\beta)$ is given by (8.262). The analysis is simplified by the introduction of the log-likelihood ratio (LLR).
In other words, the sign of the log-likelihood ratio yields a hard decision on the symbol being detected, while the magnitude indicates the reliability of the decision. The function $\ell_k$ is used as a soft-decision parameter in some detection algorithms (see page 923).
In the Max-Log-MAP formulation, the LLR is approximated by retaining, for each symbol value, only the dominant term of the corresponding sum of probabilities.
Apart from non-essential constants, the Max-Log-MAP algorithm, in the case of transmission of i.i.d. symbols over a channel with additive white Gaussian noise, is formulated as follows:
$$c_k(j \mid i) = -|\rho_k - u_k|^2 \qquad i, j = 1, \ldots, N_s \tag{8.278}$$
$$c_K(j \mid i) = 0 \qquad i, j = 1, \ldots, N_s \tag{8.279}$$
For two variables $x$ and $y$ we have
$$\max{}^*(x, y) \triangleq \ln(e^x + e^y) = \max(x, y) + \ln\!\left(1 + e^{-|x-y|}\right)$$
The extension to more variables is readily obtained by induction. So, if in the backward and forward procedures of page 676 we substitute the max function with the $\max^*$ function, we obtain the exact Log-MAP formulation that relates $v_k(i) = \ln V_k(i)$ to $b_k(i) = \ln B_k(i)$ and $f_k(i) = \ln F_k(i)$, using the branch metric $c_k(j \mid i)$.
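The $\max^*$ operator is the Jacobian logarithm; a minimal Python sketch (function names ours) is:

```python
import math

def max_star(x, y):
    """Jacobian logarithm: max*(x, y) = ln(e^x + e^y)
    = max(x, y) + ln(1 + e^{-|x - y|}).
    Replacing max with max* turns Max-Log-MAP into exact Log-MAP."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def max_star_n(values):
    """Extension to several variables by induction (a left fold)."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc
```

The correction term $\ln(1 + e^{-|x-y|})$ lies in $(0, \ln 2]$ and is often implemented as a small lookup table in hardware.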
1. The first, illustrated in Figure 8.25a, is considered for its low implementation complexity; it refers to the receiver of Figure 7.12, where
$$\mathcal G_{Rc}(f) = \sqrt{\text{rcos}\left(\frac{f}{1/T}, \rho\right)} \tag{8.289}$$
and $w_C(t)$ is white Gaussian noise with spectral density $N_0$. Recalling that $r_R(t) = s_R(t) + w_R(t)$, with
$$\mathcal P_{w_R}(f) = \mathcal P_{w_C}(f)\, |\mathcal G_{Rc}(f)|^2 = N_0\, \text{rcos}\left(\frac{f}{1/T}, \rho\right) \tag{8.290}$$
[Figure 8.25a: the symbols $a_k$ pass through the channel $q_C$ with AWGN $w_C(t)$; the received signal $r_C(t)$ is filtered by $g_{Rc}$, with $\mathcal G_{Rc}(f) = \sqrt{\text{rcos}(f/(1/T), \rho)}$, and sampled at $t_0 + kT$ to produce $z_k$.]
Figure 8.25. Two receiver structures with i.i.d. noise samples at the decision point.
8.11. Optimum receivers for transmission over dispersive channels 679
it is easily verified that the noise sequence $\{w_k = w_R(t_0 + kT)\}$ has a constant spectral density equal to $N_0$, and that the variance of the noise samples is $\sigma_w^2 = N_0/T$. Although the filter defined by (8.289) does not necessarily yield a sufficient statistic (see Observation 8.13 on page 681), it considerably reduces the noise, and this may be useful in estimating the channel impulse response. Another problem concerns the optimum timing phase, which may be difficult to determine for non-minimum-phase channels.
2. An alternative, also known as the Forney receiver, is represented in Figure 8.20 and repeated in Figure 8.25b.
To construct the WF, however, it is necessary to determine poles and zeros of the function $\Phi(z)$; this can be rather complicated in real-time applications. A practical method is based on Observation 8.9 on page 650. From the knowledge of the autocorrelation sequence of the channel impulse response, the prediction error filter $A(z)$ is determined. The WF of Figure 8.25 is given by $W(z) = \frac{1}{f_0^2}\, A^*(1/z^*)$; therefore it is an FIR filter. The impulse response $\{\eta_n\}$ is given by the inverse z-transform of $\frac{1}{f_0} F(z) = 1/A(z)$. For the realization of symbol detection algorithms, a windowed version of the impulse response is considered.
A further method, which usually requires a filter $w$ with a smaller number of coefficients than the previous method, is based on the observation that, for channels with low noise level, the DFE solution determined by the MSE method coincides with the DFE-ZF solution; in this case the FF filter plays the role of the filter $w$ in Figure 8.25b. We consider two cases.
Actually, unless the length of $\{\eta_n\}$ is shorter than that of the impulse response $q_C$, it is convenient to use Ungerboeck's formulation of MLSD [8], which utilizes only the samples $\{x_k\}$ at the MF output; now, however, the metric is no longer Euclidean. The derivation of the non-Euclidean metric is the subject of the next section.
We note that, as it is not possible to obtain the likelihood of an isolated symbol from the non-Euclidean metric, there are cases in which this method is not adequate. We refer in particular to the case in which decoding with soft input is performed separately from symbol detection in the presence of ISI (see Section 11.3.2). However, joint decoding and detection are always possible using a suitable trellis (see Section 11.3.2).
For a suitable basis, we consider for (8.292) the following vector representation:
$$\mathbf r = \mathbf s + \mathbf w$$
where $r_{q_C}$ is the autocorrelation of $q_C$, whose samples are given by (see (8.23))
$$x_k = x(t_0 + kT) \tag{8.303}$$
In (8.298) the first term can be ignored since it does not depend on $\boldsymbol\alpha$, while the other two terms are rewritten in the following form:
$$\ell(\boldsymbol\alpha) = \text{Re}\left\{ 2\sum_{k=0}^{K-1} \alpha_k^*\, x_k - \sum_{k_1=0}^{K-1}\sum_{k_2=0}^{K-1} \alpha_{k_1}\alpha_{k_2}^*\, h_{k_2-k_1} \right\} \tag{8.304}$$
Observation 8.13
The sequence of samples $\{x_k\}$, taken by sampling the MF output signal with sampling period equal to the symbol period $T$, forms a sufficient statistic to detect the message $\{a_k\}$ associated with the signal $r_C$ defined in (8.292).
We express the double summation in (8.304) as the sum of three terms, the first for $k_1 = k_2$, the second for $k_1 < k_2$, and the third for $k_1 > k_2$:
$$\begin{aligned} A &= \sum_{k_1=0}^{K-1}\sum_{k_2=0}^{K-1} \alpha_{k_1}\alpha_{k_2}^*\, h_{k_2-k_1} \\ &= \sum_{k_1=0}^{K-1} |\alpha_{k_1}|^2\, h_0 + \sum_{k_2=1}^{K-1}\sum_{k_1=0}^{k_2-1} \alpha_{k_1}\alpha_{k_2}^*\, h_{k_2-k_1} + \sum_{k_1=1}^{K-1}\sum_{k_2=0}^{k_1-1} \alpha_{k_1}\alpha_{k_2}^*\, h_{k_2-k_1} \end{aligned} \tag{8.305}$$
In particular, if
To maximize $\ell(\boldsymbol\alpha)$ or, equivalently, to minimize $-\ell(\boldsymbol\alpha)$ with respect to $\boldsymbol\alpha$, we apply the Viterbi algorithm (see page 663) with the state vector defined as
Extensions of Ungerboeck’s approach to time variant radio channels are proposed in [9].
error. For the purpose of determining the symbol error probability, the concept of error event is introduced. Let $\{\boldsymbol\sigma\} = (\boldsymbol\sigma_{i_0}, \ldots, \boldsymbol\sigma_{i_{K-1}})$ be the realization of the state sequence associated with the information sequence, and let $\{\hat{\boldsymbol\sigma}\}$ be the sequence chosen by the Viterbi algorithm. In a sufficiently long time interval, the paths in the trellis diagram associated with $\{\boldsymbol\sigma\}$ and $\{\hat{\boldsymbol\sigma}\}$ diverge and converge several times: every distinct separation from the correct path is called an error event.
Definition 8.1
An error event e is defined as a path in the trellis diagram that has only the initial and final
states in common with the correct path; the length of an error event is equal to the number
of nodes visited in the trellis before rejoining with the correct path.
Error events of length one and two are illustrated in a trellis diagram with two states,
where the correct path is represented by a continuous line, in Figure 8.26a and Figure 8.26b,
respectively.
Let $E$ be the set of all error events beginning at instant $i$. Each element $e$ of $E$ is characterized by a correct path $\{\boldsymbol\sigma\}$ and a wrong path $\{\hat{\boldsymbol\sigma}\}$, which diverges from $\{\boldsymbol\sigma\}$ at instant $i$ and converges to $\{\boldsymbol\sigma\}$ after a certain number of steps in the trellis diagram. We assume that the probability $P[e]$ is independent of the instant $i$: this hypothesis is verified with good approximation if the length of the trellis diagram is much greater than the length of the significant error events. An error event produces one or more errors in the detection of symbols of the input sequence. We have a detection error at instant $k$ if the detection of the input at the $k$-th stage of the trellis diagram is not correct. We define the function [10]
$$c_m(e) = \begin{cases} 1 & \text{if } e \text{ causes a detection error at the instant } i+m, \text{ with } m \ge 0 \\ 0 & \text{otherwise} \end{cases} \tag{8.313}$$
The probability of a particular error event that starts at instant $i$ and causes a detection error at instant $k$ is given by $c_{k-i}(e)\,P[e]$. Because the error events in $E$ are disjoint, we have
$$P_e = P[\hat a_k \ne a_k] = \sum_{i=-\infty}^{k} \sum_{e\in E} c_{k-i}(e)\, P[e] \tag{8.314}$$
Figure 8.26. Error events of length (a) one and (b) two in a trellis diagram with two states.
Defining
$$N(e) = \sum_{m=0}^{\infty} c_m(e)$$
which indicates the total number of detection errors caused by the error event $e$, we have
$$P_e = \sum_{e\in E} N(e)\, P[e] \tag{8.317}$$
where the dependence on the time index k vanishes. We therefore find that the detection
error probability is equal to the average number of errors caused by all the possible error
events initiating at a given instant i; this result is expected, because the detection error
probability at a particular instant k must take into consideration all error events that initiate
at previous instants and are not yet terminated.
If $\{\mathbf s\} = (\mathbf s_0, \ldots, \mathbf s_{K-1})$ denotes the random sequence of states at the transmitter and $\{\hat{\mathbf s}\} = (\hat{\mathbf s}_0, \ldots, \hat{\mathbf s}_{K-1})$ denotes the random sequence of states selected by the ML receiver, the probability of an error event $e$ beginning at a given instant $i$ depends on the joint probability of the correct and incorrect paths, and it can be written as
$$P[e] = P[\{\hat{\mathbf s}\} = \{\hat{\boldsymbol\sigma}\} \mid \{\mathbf s\} = \{\boldsymbol\sigma\}]\ P[\{\mathbf s\} = \{\boldsymbol\sigma\}] \tag{8.318}$$
Because it is usually difficult to find the exact expression for $P[\{\hat{\mathbf s}\} = \{\hat{\boldsymbol\sigma}\} \mid \{\mathbf s\} = \{\boldsymbol\sigma\}]$, we resort to upper and lower limits.
Upper limit. Because detection of the sequence of states $\{\mathbf s\}$ is obtained by observing the sequence $\{u\}$, for the signal in (8.173) with zero-mean additive white Gaussian noise having variance $\sigma_I^2$ per dimension, we have the upper limit
$$P[\{\hat{\mathbf s}\} = \{\hat{\boldsymbol\sigma}\} \mid \{\mathbf s\} = \{\boldsymbol\sigma\}] \le Q\left(\frac{d[u(\{\boldsymbol\sigma\}), u(\{\hat{\boldsymbol\sigma}\})]}{2\sigma_I}\right) \tag{8.319}$$
where $d[u(\{\boldsymbol\sigma\}), u(\{\hat{\boldsymbol\sigma}\})]$ is the Euclidean distance between the signals $u(\{\boldsymbol\sigma\})$ and $u(\{\hat{\boldsymbol\sigma}\})$, given by (8.193). Substitution of the upper limit in (8.317) yields
$$P_e \le \sum_{e\in E} N(e)\, P[\{\mathbf s\} = \{\boldsymbol\sigma\}]\ Q\left(\frac{d[u(\{\boldsymbol\sigma\}), u(\{\hat{\boldsymbol\sigma}\})]}{2\sigma_I}\right) \tag{8.320}$$
which can be rewritten as follows, by giving prominence to the more significant terms,
$$P_e \le \sum_{e\in E_{min}} N(e)\, P[\{\mathbf s\} = \{\boldsymbol\sigma\}]\ Q\left(\frac{d_{min}}{2\sigma_I}\right) + \text{other terms} \tag{8.321}$$
8.12. Error probability achieved by MLSD 685
where $E_{min}$ is the set of error events at minimum distance $d_{min}$ defined in (8.194), and the remaining terms are characterized by arguments of the Q function larger than $d_{min}/(2\sigma_I)$. For high values of the signal-to-noise ratio these terms are negligible and the following approximation holds:
$$P_e \simeq K_1\, Q\left(\frac{d_{min}}{2\sigma_I}\right) \tag{8.322}$$
where
$$K_1 = \sum_{e\in E_{min}} N(e)\, P[\{\mathbf s\} = \{\boldsymbol\sigma\}] \tag{8.323}$$
Lower limit. A lower limit to the error probability is obtained by considering the probability that any error event may occur rather than the probability of a particular error event. Since $N(e) \ge 1$ for all error events $e$, from (8.317) we have
$$P_e \ge \sum_{e\in E} P[e] \tag{8.324}$$
Let us consider a particular path in the trellis diagram determined by the sequence of states $\{\boldsymbol\sigma\}$. We set
$$d_{min}(\{\boldsymbol\sigma\}) = \min_{\{\tilde{\boldsymbol\sigma}\}} d[u(\{\boldsymbol\sigma\}), u(\{\tilde{\boldsymbol\sigma}\})] \tag{8.325}$$
i.e., for this path, $d_{min}(\{\boldsymbol\sigma\})$ is the Euclidean distance of the minimum-distance error event. We have $d_{min}(\{\boldsymbol\sigma\}) \ge d_{min}$, where $d_{min}$ is the minimum distance obtained considering all possible state sequences. If $\{\boldsymbol\sigma\}$ is the correct state sequence, the probability of an error event is lower limited by
$$P[e \mid \{\mathbf s\} = \{\boldsymbol\sigma\}] \ge Q\left(\frac{d_{min}(\{\boldsymbol\sigma\})}{2\sigma_I}\right) \tag{8.326}$$
Consequently,
$$P_e \ge \sum_{\{\boldsymbol\sigma\}} P[\{\mathbf s\} = \{\boldsymbol\sigma\}]\ Q\left(\frac{d_{min}(\{\boldsymbol\sigma\})}{2\sigma_I}\right) \tag{8.327}$$
If some terms are omitted in the equation, the lower limit is still valid, because the terms are non-negative. Therefore, taking into consideration only those state sequences $\{\boldsymbol\sigma\}$ for which $d_{min}(\{\boldsymbol\sigma\}) = d_{min}$, we obtain
$$P_e \ge \sum_{\{\boldsymbol\sigma\}\in A} P[\{\mathbf s\} = \{\boldsymbol\sigma\}]\ Q\left(\frac{d_{min}}{2\sigma_I}\right) \tag{8.328}$$
where $A$ is the set of state sequences that admit an error event with minimum distance $d_{min}$, for an arbitrarily chosen initial instant of the given error event. Defining
$$K_2 = \sum_{\{\boldsymbol\sigma\}\in A} P[\{\mathbf s\} = \{\boldsymbol\sigma\}] \tag{8.329}$$
as the probability that a path $\{\boldsymbol\sigma\}$ admits an error event with minimum distance, it is
$$P_e \ge K_2\, Q\left(\frac{d_{min}}{2\sigma_I}\right) \tag{8.330}$$
Combining upper and lower limits we obtain
$$K_2\, Q\left(\frac{d_{min}}{2\sigma_I}\right) \le P_e \le K_1\, Q\left(\frac{d_{min}}{2\sigma_I}\right) \tag{8.331}$$
For large values of the signal-to-noise ratio we therefore have
$$P_e \simeq K\, Q\left(\frac{d_{min}}{2\sigma_I}\right) \tag{8.332}$$
for some value of the constant $K$ between $K_2$ and $K_1$.
We stress that the error probability, expressed by (8.332) and (8.195), is determined by the ratio between the minimum distance $d_{min}$ and the standard deviation of the noise $\sigma_I$. Here the expressions of the constants $K_1$ and $K_2$ are obtained by resorting to various approximations. An accurate method to calculate upper and lower limits of the error probability is proposed in [11].
and variance $\sigma_I^2$ per dimension. The metric that the Viterbi algorithm attributes to the sequence of states corresponding to the sequence of input symbols $\{a_k\}$ is given by the squared Euclidean distance between the sequence of samples $\{z_k\}$ at the detector input and its mean value, which is known given the sequence of symbols (see (8.189)),
$$\sum_{k=0}^{\infty} \left| z_k - \sum_{n=-L_1}^{L_2} \eta_n\, a_{k-n} \right|^2 \tag{8.334}$$
In the previous section it was demonstrated that the symbol error probability is given by (8.332). In particularly simple cases, the minimum distance can be determined by direct inspection of the trellis diagram; in practice, however, this situation rarely occurs in channels with ISI. To evaluate the minimum distance it is then necessary to resort to a numerical search.
To find the minimum-distance error event with initial instant $k = 0$, we consider the
desired signal $u_k$ under the condition that the sequence $\{a_k\}$ is transmitted, and we compute the squared Euclidean distance between this signal and the signal obtained for another sequence $\{\tilde a_k\}$,
$$d^2[u(\{a_k\}), u(\{\tilde a_k\})] = \sum_{k=0}^{\infty} \left| \sum_{n=-L_1}^{L_2} \eta_n\, a_{k-n} - \sum_{n=-L_1}^{L_2} \eta_n\, \tilde a_{k-n} \right|^2 \tag{8.335}$$
where it is assumed that the two paths identifying the state sequences are identical for k < 0.
It is possible to avoid computing the minimum distance for each sequence $\{a_k\}$ if we exploit the linearity of the ISI. Defining
$$\varepsilon_k = a_k - \tilde a_k \tag{8.336}$$
we have
$$d^2(\{\varepsilon_k\}) = d^2[u(\{a_k\}), u(\{\tilde a_k\})] = \sum_{k=0}^{\infty} \left| \sum_{n=-L_1}^{L_2} \eta_n\, \varepsilon_{k-n} \right|^2 \tag{8.337}$$
The minimum among the squared Euclidean distances relative to all error events that initiate at $k = 0$ is
$$d_{min}^2 = \min_{\{\varepsilon_k\}:\ \varepsilon_k = 0,\ k < -L_1;\ \varepsilon_{-L_1} \ne 0} d^2(\{\varepsilon_k\}) \tag{8.338}$$
The minimization problem is equivalent to determining the path in a trellis diagram that has minimum metric (8.337) and differs from the path that joins states corresponding to correct decisions: the resulting metric is $d_{min}^2$. We note, however, that the cardinality of the alphabet of $\varepsilon_k$ is larger than $M$, and this implies that the complexity of this trellis diagram can be much larger than that of the original trellis diagram. In the PAM case, the cardinality of $\varepsilon_k$ is equal to $2M - 1$.
In practice, as the terms of the series in (8.337) are non-negative, if we truncate the series after a finite number of terms we obtain a result that is smaller than or equal to the effective value. Therefore a lower limit to the minimum distance is given by
$$d_{min}^2 \ge \min_{\{\varepsilon_k\}:\ \varepsilon_k = 0,\ k < -L_1;\ \varepsilon_{-L_1} \ne 0}\ \sum_{k=0}^{K-1} \left| \sum_{n=-L_1}^{L_2} \eta_n\, \varepsilon_{k-n} \right|^2 \tag{8.340}$$
Example 8.12.1
We consider the partial response system class IV (PR-IV), also known as modified duobinary (see Appendix 7.A), illustrated in Figure 8.27. The transfer function of the discrete-time overall system is given by $\eta(D) = 1 - D^2$. For an ideal noiseless system, the input sequence $\{u_k\}$ to the detector is formed by random variables taking values in the set $\{-2, 0, +2\}$, as shown in Figure 8.28.
[Figure 8.27: the binary sequence $b_k$ is mapped (BMAP) to symbols $a_k \in \{-1, +1\}$, filtered by $\eta(D)$; the noise $w_k$ is added to give $z_k$, from which the MLSD produces $\hat a_k$.]
$$b_k:\ \ldots\ 0\ 1\ 1\ 0\ 1\ 0\ 0\ \ldots \qquad a_k:\ \ldots\ {-1}\ {+1}\ {+1}\ {-1}\ {+1}\ {-1}\ {-1}\ \ldots$$
Figure 8.28. Input and output sequences for an ideal PR-IV system.
Assuming that the sequence of noise samples $\{w_k\}$ is composed of real-valued, statistically independent Gaussian random variables with mean zero and variance $\sigma_I^2$, and observing that $u_k$ for $k$ even (odd) depends only on symbols with even (odd) indices, the MLSD receiver for a PR-IV system is usually implemented by considering two interlaced, independent dicode channels, each having a transfer function given by $1 - D$. As seen in Figure 8.29, for detection of the two interlaced input symbol sequences, two trellis diagrams are used. The state at instant $k$ is given by the symbol $\mathbf s_k = a_k$, where $k$ is
[Figure 8.29: two interlaced trellis diagrams for the dicode channels, each with states $\pm 1$ and branches labeled with the input bit and the output value $u_k \in \{-2, 0, +2\}$.]
[Figure 8.30 residue: survivor sequences at $k = 0, 1, \ldots, 5$, with detected bits $b_0 = 0$, $b_1 = 0$, $b_2 = 1$, $b_3 = 1$.]
Figure 8.30. Survivor sequences at successive iterations of the Viterbi algorithm for a dicode channel.
even-valued in one of the two diagrams and odd-valued in the other. Each branch of the
diagram is marked with a label that represents the binary input symbol bk or, equivalently,
the value of the dicode signal u k . For a particular realization of the output signal of a
dicode channel, the two survivor sequences at successive iterations of the Viterbi algorithm
are represented in Figure 8.30.
It is seen that the minimum squared Euclidean distance between two separate paths in the trellis diagram is given by $d_{min}^2 = 2^2 + 2^2 = 8$. However, we note that for the same initial instant there are an infinite number of error events at minimum distance from the effective path: this fact is evident in the trellis diagram of Figure 8.31a, where the state $\mathbf s_k = (\varepsilon_k) \in \{-2, 0, +2\}$ characterizes the development of the error event, and the labels indicate the branch metrics associated with an error event. It is seen that at every instant an error event may be extended along a path, for which the metric is equal to zero, parallel
[Figure 8.31a: error-state trellis with states $\varepsilon_k \in \{-2, 0, +2\}$ and branch metrics 0, 4, and 16.]
Figure 8.31. Examples of (a) trellis diagram to compute the minimum distance for a dicode channel and (b) four error events with minimum distance.
to the path corresponding to the zero sequence. Paths of this type correspond to a sequence of errors having the same polarity. Four error events with minimum distance are shown in Figure 8.31b.
The error probability is given by $K\, Q\!\left(\frac{\sqrt 8}{2\sigma_I}\right)$, where $K_2 \le K \le K_1$. The constant $K_2$ can be immediately determined by noting that every effective path admits at every instant at least one error event with minimum distance: consequently $K_2 = 1$. To find $K_1$, we consider the contribution of an error event with $m$ consecutive errors. For this event to occur, it is required that $m$ consecutive input symbols have the same polarity, which happens with probability $2^{-m}$. Since such an error event determines $m$ symbol errors, and two error events with identical characteristics can be identified in the trellis diagram, we have
$$K_1 = 2 \sum_{m=1}^{\infty} m\, 2^{-m} = 4 \tag{8.341}$$
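The value of the series in (8.341) can be checked numerically with a throwaway Python sketch (the function name is ours):

```python
def k1_partial(m_max):
    """Partial sum of K1 = 2 * sum_{m=1}^{inf} m * 2**(-m).
    Since sum_{m>=1} m/2**m = 2, the series converges to 4."""
    return 2 * sum(m * 2.0 ** (-m) for m in range(1, m_max + 1))
```

A few dozen terms already reproduce the limit $K_1 = 4$ to machine precision.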
p
Besides the error events associated with the minimum distance dmin D 8, the occur-
rence of long sequences of identical output symbols from a dicode channel raises two
problems.
ž The system becomes catastrophic in the sense that in the trellis diagram used by
the detector, valid sequences of arbitrary length are found having squared Euclidean
distance from the effective path equal to 4; therefore, a MLSD receiver withp
fi-
4
nite memory will make additional errors with probability proportional to Q 2¦ I if
the channel produces sequences fu k D 0g of length larger than the memory of the
detector.
ž The occurrence of these sequences is detrimental for the control of receive filter gain
and sampling instants: the problem is solved by suitable coding of the input binary
sequence, that sets a limit on the number of consecutive identical symbols that are
allowed at the channel input.
8.13. Reduced state sequence detection 691
⁴ We will maintain the name RSSE, although the algorithm is applied to perform detection rather than estimation.
response that determine the ISI and assume, without loss of generality, that the desired sample is $f_0 = 1$. Hence the observed signal is given by
$$z_k = \sum_{n=0}^{N} f_n\, a_{k-n} + w_k = a_k + \langle \mathbf s_{k-1}, \mathbf f \rangle + w_k \tag{8.342}$$
and
$$\mathbf f^T = [f_1, f_2, \ldots, f_N] \tag{8.344}$$
Observation 8.14
In the transmission of sequences of blocks of data, the RSSE yields best performance by imposing the final state, for example using the knowledge of a training sequence. Therefore the formulation of this section is suited to the case of a training sequence placed at the end of the data block. If the training sequence is placed at the beginning of a data block, it is better to process the signals in backward mode, as described in Observation 8.6 on page 642.
The RSSE maintains the fundamental structure of MLSD unaltered, corresponding to
the search in the trellis diagram of the path with minimum cost.
To reduce the number of states to be considered, we introduce, for every component $a_{k-n}$, $n = 1, \ldots, N$, of the vector $\mathbf s_{k-1}$ defined in (8.343), a suitable partition $\Omega(n)$ of the two-dimensional set $\mathcal A$ of possible values of $a_{k-n}$: a partition is composed of $J_n$ subsets, with $J_n$ an integer between 1 and $M$. The index of the subset of the partition $\Omega(n)$ to which the symbol $a_{k-n}$ belongs is denoted by $c_n$, an integer value between 0 and $J_n - 1$. Ungerboeck's partitioning of the symbol set associated with a 16-QAM system is illustrated in Figure 8.32. The partitions must satisfy the following two conditions (see also Chapter 12, or, for a more general partitioning method, [17]):
Therefore we define as reduced state at instant $k-1$ the vector $\mathbf t_{k-1}$ that has as $n$-th element the index $c_n$ of the subset of the partition $\Omega(n)$ to which the $n$-th element of $\mathbf s_{k-1}$ belongs, for $n = 1, 2, \ldots, N$, that is
$$\mathbf t_{k-1} = [c_1, c_2, \ldots, c_N]^T \qquad c_n \in \{0, 1, \ldots, J_n - 1\} \tag{8.345}$$
and we write
}J n
= 1
0 1
}J n
= 2
0 2 1 3
}J n
= 4
0 4 2 6 1 5 3 7
}J n
= 8
0 8 4 12 2 10 6 14 1 9 5 13 3 11 7 15
}J n
= 16
Figure 8.32. Ungerboeck’s partitioning of the symbol set associated with a 16-QAM system.
The various subsets are identified by the value of cn 2 f0; 1; : : : ; Jn 1g.
It is useful to stress that the reduced state t_{k−1} does not uniquely identify a state s_{k−1}, but rather all the states s_{k−1} that include as n-th element one of the symbols belonging to the subset c_n of partition Ω(n).
The conditions imposed on the partitions guarantee that, given a reduced state t_{k−1} at instant k−1, and the subset j of partition Ω(1) to which the symbol a_k belongs, the reduced state at instant k, t_k, can be uniquely determined. In fact, observing (8.345), we have

    t_k = [c′_1, c′_2, …, c′_N]^T        (8.346)

where c′_1 = j, c′_2 is the index of the subset of the partition Ω(2) to which the subset with index c_1 of the partition Ω(1) belongs, c′_3 is the index of the subset of the partition Ω(3) to which the subset with index c_2 of the partition Ω(2) belongs, and so forth. In this way the reduced states t_{k−1} define a proper reduced state trellis diagram that represents all the possible sequences {a_k}.
As the symbol c_n can only assume one of the integer values between 0 and J_n − 1, the total number of possible reduced states of the trellis diagram of the RSSE is given by the product N_s = J_1 J_2 ⋯ J_N, with J_n ≤ M, for n = 1, 2, …, N.
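As a concrete illustration of this bookkeeping, the sketch below labels a binary, coset-style partition of 16-QAM and checks the two properties the RSSE relies on: each level J_n has exactly J_n subsets of equal size, and finer partitions refine coarser ones (the subset index at a coarse level is a bit-prefix of the index at a finer level). The labeling rule is our illustrative assumption and need not coincide with the subset numbering of Figure 8.32.

```python
# Illustrative helper: label the subsets of a binary partition of 16-QAM.
SYMBOLS = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

def subset_index(sym, Jn):
    """Index c_n in {0, ..., Jn-1} of the subset containing `sym`, for
    Jn in {1, 2, 4, 8, 16}.  Splits use parities of the integer grid
    coordinates, so each level refines the previous one by construction."""
    u, v = int((sym.real + 3) // 2), int((sym.imag + 3) // 2)
    bits = [(u + v) & 1, u & 1, (u >> 1) & 1, (v >> 1) & 1]
    c = 0
    for b in bits[:Jn.bit_length() - 1]:   # log2(Jn) successive splits
        c = (c << 1) | b
    return c

# Number of reduced states for a trellis as in Figure 8.33(b): J1=4, J2=2.
Ns = 4 * 2
```

With this labeling the coarse index is recovered from the fine one by a right shift, which is exactly the refinement condition needed for the reduced-state trellis to be well defined.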
We know that in the VA, for uncoded transmission of i.i.d. symbols, there are M possible transitions from a state, one for each of the values that a_k can assume. In the reduced state trellis diagram M transitions are still possible from a state; however, they lead to only J_1 distinct states, thus giving origin to parallel transitions.⁵ In fact, if J_1 < M, J_1 sets of branches depart from every state t_{k−1}, each set consisting of as many parallel transitions as there are symbols belonging to the subset of Ω(1) associated with the reduced state.
Therefore partitions must be obtained such that two effects are guaranteed: 1) minimum performance degradation with respect to MLSD, and 2) easy search for the optimum path among the various parallel transitions. The method usually adopted is Ungerboeck's set partitioning method, which, for every partition, maximizes the minimum distance Δ among the symbols belonging to the same subset. For QAM systems and J_n a power of 2, the maximum value of the minimum distance Δ_n relative to partition Ω(n) is obtained through a tree diagram with binary partitioning (see Chapter 12). An example of partitioning of the symbol set associated with a 16-QAM system is illustrated in Figure 8.32.
In Figure 8.33 two examples of reduced state trellis diagram are shown, both referring
to the partition of Figure 8.32.
RSSE algorithm
As in MLSD, for each transition that originates from a state t_{k−1} the RSSE computes a branch metric; the ISI contribution of the symbols not specified by the reduced state is evaluated using the survivor sequence associated with t_{k−1}.⁶

[Figure 8.33. Two examples of reduced state trellis diagrams, both referring to the partition of Figure 8.32: (a) N = 1, J_1 = 4, with states [0], [1], [2], [3]; (b) N = 2, J_1 = 4, J_2 = 2, with states [c_1, c_2], c_1 ∈ {0, …, 3}, c_2 ∈ {0, 1}.]
⁵ Parallel transitions are present when two or more branches connect the same pair of states in the trellis diagram.

⁶ We note that if the points of the subsets of the partition Ω(1) are on a rectangular grid, as in the example of Figure 8.32, the value of the symbol a_k that minimizes the metric is determined through simple "quantization rules", without explicitly evaluating the branch metric for every symbol of the subset. Hence, for every state, only J_1 explicit computations of the branch metric are needed.
a. The estimator associated with the survivor sequence uses symbols that can be considered decisions with no delay and high reliability, making PSP a suitable approach for fast time-varying channels.
b. Blind techniques, i.e. techniques that do not require knowledge of a training sequence, may be adopted as part of the PSP to estimate the various parameters.
A variant of the RSSE, belonging to the class of PSP algorithms, is the decision feedback sequence estimator (DFSE) [15], which considers as reduced state the vector s′_{k−1} formed simply by truncating the ML state vector to a length N_1 ≤ N,

    s′^T_{k−1} = [a_{k−1}, a_{k−2}, …, a_{k−N_1}]        (8.349)

The trellis diagram is built by assuming the reduced state s′_{k−1}. The term ⟨s″_{k−1}, f″⟩ represents the residual ISI, which is estimated by taking as s″_{k−1} the symbols that are memorized in the survivor sequence associated with each state. In fact, with respect to z_k, it is as if we had cancelled the term ⟨s″_{k−1}, f″⟩ by a feedback filter associated with each state s′_{k−1}. Along the optimum path, the feedback sequence s″_{k−1} is expected to be very reliable.
The branch metric of the DFSE is computed as follows:
We note that the reduced state s′_{k−1} may be further reduced by adopting the RSSE technique (8.346).
The primary difference between an MLSD receiver and the DFSE is that in the trellis diagram used by the DFSE two paths may merge earlier, as it is sufficient that they share the more recent N_1 symbols, rather than N as in MLSD. This increases the error probability; however, the performance of the DFSE is better than that achieved by a classical DFE.
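The DFSE recursion described above can be sketched for a toy BPSK system: the trellis state is the vector of the N₁ most recent symbols, and the residual ISI from older symbols is cancelled per state using that state's survivor sequence. This is an illustrative sketch under our own naming and initialization assumptions (symbols before time 0 are taken equal to +1), not the book's implementation.

```python
import itertools

def dfse_detect(z, f, N1, K):
    """DFSE sketch for BPSK over the channel of (8.342), f = [1, f_1, ..., f_N]:
    the state is the vector of the N1 <= N most recent symbols (8.349); ISI
    from older symbols is cancelled, for each state, with the symbols
    memorized in that state's survivor sequence (per-survivor processing)."""
    A = (-1.0, 1.0)
    N = len(f) - 1
    states = list(itertools.product(A, repeat=N1))
    start = (1.0,) * N1                                # known preamble of +1
    cost = {s: (0.0 if s == start else float("inf")) for s in states}
    surv = {s: [1.0] * (N - N1) for s in states}       # older symbols, oldest first
    for zk in z[:K]:
        new_cost, new_surv = {}, {}
        for s in states:
            # past symbols a_{k-1}, a_{k-2}, ...: state first, then survivor
            past = list(s) + list(reversed(surv[s]))
            isi = sum(f[n] * past[n - 1] for n in range(1, N + 1))
            for ak in A:
                branch = abs(zk - ak - isi) ** 2       # f_0 = 1
                ns = (ak,) + s[:-1]                    # next reduced state
                c = cost[s] + branch
                if ns not in new_cost or c < new_cost[ns]:
                    new_cost[ns], new_surv[ns] = c, surv[s] + [s[-1]]
        cost, surv = new_cost, new_surv
    best = min(cost, key=cost.get)
    seq = surv[best] + list(reversed(best))            # a_{-N}, ..., a_{K-1}
    return seq[N:]                                     # drop the preamble
```

With N₁ = N the sketch degenerates to full MLSD; with N₁ < N the feedback term plays the role of the per-state DFE described above.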
8.14. Passband equalizers

where, from (7.42), or equivalently from Figure 7.11, q_Ch is a baseband equivalent pulse with frequency response
with frequency response
.bb/
QCh . f / D HTx . f / 12 GCh . f / D HTx . f / GCh . f C f 0 / 1. f C f 0 / (8.356)
In this model the phase φ in (8.355) is such that arg Q_Ch(0) = 0; hence it includes the phase offset introduced by the channel frequency response at f = f_0, in addition to the carrier phase offset between the transmit and receive carriers. Because of this impairment, which as a first approximation is equivalent to a rotation of the symbol constellation, a suitable receiver structure needs to be developed [10].
As φ may be time-varying, it can be decomposed as the sum of three terms

    φ(t) = Δφ + 2π Δf t + ψ(t)        (8.357)

where Δφ is a fixed phase offset, Δf is a fixed frequency offset, and ψ(t) is a random or quasi-periodic term (see the definition of phase noise in (4.271)).
For example, over telephone channels typically |ψ(t)| ≤ π/20. Moreover, the highest frequency of the spectral components of ψ(t) is usually lower than 0.1/T: in other words, if ψ(t) were a sinusoidal signal, it would have a period larger than 10T.
Therefore φ(t) may be regarded as a constant, or at least as slowly time-varying, over a time interval equal to the duration of the overall system impulse response.
Figure 8.34. Passband modulation scheme with phase offset introduced by the channel.
After the passband matched filter, the signal is oversampled with sampling period T/F_0; oversampling is suggested by the following two reasons:
1. If q_Ch^{(pb)} is unknown and g_M is a simple passband filter, matched filtering is carried out by the filter c^{(pb)}; hence the need for oversampling.
2. If the timing phase t_0 is not accurate, it is convenient to use an FSE.
Let q(t) be the overall baseband equivalent impulse response of the system at the sampler input,

    q(t) = F⁻¹[Q(f)]        (8.362)

where

    Q(f) = Q_Ch(f) G_M(f) = H_Tx(f) G_Ch(f + f_0) G_M(f) 1(f + f_0)        (8.363)

The sampled passband signal is given by

    x_n^{(pb)} = x_I^{(pb)}(t_0 + n T/F_0) + j x_Q^{(pb)}(t_0 + n T/F_0)
              = Σ_{k=−∞}^{+∞} a_k h_{n−kF_0} e^{j(2π f_0 n T/F_0 + φ)} + w̃_n        (8.364)

where⁷

    h_n = q(t_0 + n T/F_0)        (8.365)

In (8.364), w̃_n denotes the noise component

    w̃_n = w_R(t_0 + n T/F_0)        (8.366)

where w_R(t) = w ∗ g_M^{(pb)}(t).
For a passband equalizer with N coefficients, the output signal with sampling period equal to T is given by

    y_k^{(pb)} = Σ_{i=0}^{N−1} c_i^{(pb)} x_{kF_0−i}^{(pb)} = x_{kF_0}^{(pb)T} c^{(pb)}        (8.367)

with the usual meaning of the two vectors x_n^{(pb)} and c^{(pb)}.
Ideally, the result should be

    y_k^{(pb)} = a_{k−D} e^{j(2π f_0 kT + φ)}        (8.368)

where the phase offset φ needs to be estimated.
In Figure 8.36 the signal y_k^{(pb)} is shifted to baseband by multiplication with the function e^{−j(2π f_0 kT + φ̂)}, where φ̂ is an estimate of φ. Then the data detector follows.
At this point some observations can be made: by demodulating the received signal, that is, by multiplying it by the function e^{−j2π f_0 t} before the equalizer or the receive filter, we obtain a scheme equivalent to that of Figure 8.3, with a baseband equalizer. As we will see at the end of this section, the only advantage of a passband equalizer is that the computational complexity of the receiver is reduced; in any case, it is desirable to compensate for the phase offset, that is, to multiply the received signal by e^{−jφ̂}, as near as possible to the decision point, so that the delay in the loop for the update of the phase offset estimate is small.
    e_k = a_{k−D} − y_k        (8.369)

    J = E[|e_k|²]
      = E[|a_{k−D} − x_k^{(pb)T} c^{(pb)} e^{−j(2π f_0 kT + φ̂)}|²]        (8.370)
      = E[|a_{k−D} e^{j(2π f_0 kT + φ̂)} − x_k^{(pb)T} c^{(pb)}|²]
Equation (8.370) expresses the classical Wiener problem for a desired signal given by a_{k−D} e^{j(2π f_0 kT + φ̂)}, where

    R = E[x_k^{(pb)*} x_k^{(pb)T}]        (8.373)

with elements

    [R]_{ℓ,m} = σ_a² Σ_{i=−∞}^{+∞} h_i h*_{i−(ℓ−m)} e^{j2π f_0 (ℓ−m)T} + r_{w_R}((ℓ − m)T)        (8.374)

and

    p = E[a_{k−D} e^{j(2π f_0 kT + φ̂)} x_k^{(pb)*}]        (8.375)
Adaptive method
The adaptive LMS algorithm is used for an instantaneous squared error defined as

    |e_k|² = |a_{k−D} − x_k^{(pb)T} c^{(pb)} e^{−j(2π f_0 kT + φ̂)}|²        (8.379)

where

    e_k^{(pb)} = e_k e^{j(2π f_0 kT + φ̂_k)}        (8.381)

and

    θ_k = 2π f_0 kT + φ̂_k        (8.383)
then

    |e_k|² = (a_{k−D} − y_k)(a_{k−D} − y_k)*
           = (a_{k−D} − y_k^{(pb)} e^{−jθ})(a_{k−D}* − y_k^{(pb)*} e^{jθ})        (8.384)

Therefore we obtain

    ∇_{φ̂} |e_k|² = ∂|e_k|²/∂θ = j y_k^{(pb)} e^{−jθ} e_k* − e_k j y_k^{(pb)*} e^{jθ}
                 = 2 Im[e_k (y_k^{(pb)} e^{−jθ})*]
                 = 2 Im[e_k y_k*]        (8.385)

We note that Im[a_{k−D} y_k*] is related to the "sine" of the phase difference between a_{k−D} and y_k; therefore the algorithm reaches convergence only if the phase of y_k coincides (on average) with that of a_{k−D}.
The law for updating the phase offset estimate follows by a stochastic gradient rule applied to (8.385). Here c^{(pb)} is a complex-valued filter, and the passband input signal x^{(pb)} is real-valued. The overall complexity of filtering and adaptation is lower than in the previous realization, although convergence is slower. A receiver structure with a baseband adaptive filter is depicted in Figure 8.40.
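The gradient (8.385) yields a simple first-order phase tracking loop. The sketch below applies it to a toy ISI-free passband model with a single unit-gain tap and known training symbols; the function name, step size, and signal model are our illustrative assumptions, not the book's scheme.

```python
import cmath, math, random

def track_phase(symbols, f0T, phi, mu=0.1, sigma=0.0, seed=0):
    """First-order phase loop driven by the stochastic gradient of (8.385),
    grad = 2 Im[e_k y_k*], for the ISI-free toy model
    x_k = a_k exp(j(2 pi f0 k T + phi)) + w_k."""
    rng = random.Random(seed)
    phi_hat = 0.0
    for k, a in enumerate(symbols):
        w = complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        x = a * cmath.exp(1j * (2 * math.pi * f0T * k + phi)) + w
        # shift to baseband with the current phase estimate (cf. Figure 8.36)
        y = x * cmath.exp(-1j * (2 * math.pi * f0T * k + phi_hat))
        e = a - y                                  # training-mode error (8.369)
        phi_hat -= mu * (e * y.conjugate()).imag   # descend the gradient (8.385)
    return phi_hat
```

For unit-modulus symbols the update reduces to φ̂ ← φ̂ + μ sin(φ − φ̂), which makes the "sine of the phase difference" remark above explicit.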
Figure 8.37. QAM passband receiver for transmission over telephone channels.

[Figure 8.38. Implementation of the scheme of Figure 8.37 using real-valued signals and filters.]

[Figure 8.39. Efficient implementation of the receiver of Figure 8.37 combining phase splitter and equalizer.]

[Figure 8.40. Efficient implementation of the QAM receiver using a baseband filter.]
8.15. LE for voiceband modems

    N = F_0 L        (8.394)
With reference to Figure 8.36 and (8.364), we consider the case in which, after the phase splitter, the signal x_n^{(pb)} is shifted to baseband using a nominal carrier frequency f_0. From (8.364) and (8.357), we define

    x̄_n = x_n^{(pb)} e^{−j2π f_0 n T/F_0}
        = Σ_{ℓ=−∞}^{+∞} a_ℓ h_{n−ℓF_0} e^{j(2π Δf n T/F_0 + Δφ)} + w̄_n        (8.395)

where Δf and Δφ represent, respectively, the carrier frequency offset and the phase offset, both unknown quantities. In (8.395), the noise w̄_n is given by

    w̄_n = w̃_n e^{−j2π f_0 n T/F_0}        (8.396)
Moreover, let

    v_n = Σ_{ℓ=−∞}^{+∞} a_ℓ h_{n−ℓF_0}        (8.397)

At the instant the equalizer outputs the sample ȳ_k, the samples stored in the delay line of the filter c̄ are given by

    x̄_{kF_0−i} = v_{kF_0−i} e^{j(2π Δf (kF_0−i) T/F_0 + Δφ)} + w̄_{kF_0−i},   i = 0, 1, …, N − 1        (8.398)

Choosing as training sequence {a_ℓ} the repetition of a PN sequence with period L, the sequence {v_n} is periodic with period N = F_0 L; in particular,

    v_{kF_0−N−i} = v_{kF_0−i}        (8.399)

Consequently, from (8.395), neglecting the noise w̄_n, the sample that leaves the equalizer delay line, x̄_{kF_0−N}, differs from the sample that enters, x̄_{kF_0}, by a phase rotation equal to 2π Δf N (T/F_0) = 2π Δf L T. On the other hand, noise samples that are separated by an interval equal to LT are uncorrelated.
We consider now two problems: detection of the beginning of the transmission, and efficient computation of the equalizer coefficients.

    M = (1 / (M̂_x̄ N_w)) Σ_{n=0}^{N_w−1} |x̄_{kF_0−nF_0} − x̄_{kF_0−N−nF_0} e^{j2π Δf̂ L T}|²        (8.400)

Because −π < arg(·) < π, |Δf̂_opt| is smaller than 1/(2LT); this fact sets a limit on Δf. For example, for a filter with L = 32 and 1/T = 2400 Baud, it must be |Δf| < 37.5 Hz.
For Δf̂_opt given by (8.400), the value of M is computed. If it falls below a certain threshold, then we detect the presence of a full period of the training signal in the equalizer filter delay line, and we trigger the procedure for computing the equalizer coefficients.
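The rotation-by-2πΔfLT property behind (8.400) also gives a direct frequency-offset estimate: correlate samples taken one training period N apart and take the argument. The sketch below is our illustration of this idea, not the book's exact minimization procedure; the names and the number of averaged pairs are assumptions.

```python
import cmath, math

def estimate_freq_offset(xbar, N, LT, Nw=8):
    """Samples of the periodic training signal taken N positions apart differ
    only by the rotation exp(j 2 pi df L T) (noise aside), so df follows from
    the argument of their sample correlation.  Unambiguous only for
    |df| < 1/(2 L T), matching the limit stated in the text."""
    corr = sum(xbar[n + N] * xbar[n].conjugate() for n in range(Nw))
    return cmath.phase(corr) / (2 * math.pi * LT)
```

Averaging over N_w pairs reduces the noise sensitivity, in the spirit of the metric (8.400).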
Considering that in the presence of the training signal we have M_x̄ ≫ σ²_w̄, whereas in the absence of the training signal M_x̄ ≈ σ²_w̄, the metric in (8.402) is much lower than that in (8.403). In [21] it is observed that a value of N_w = 8 is adequate to reach a false alarm probability, i.e. the probability of detecting the training sequence in the presence of noise only, of 10⁻⁴, and a probability of missed detection, i.e. the probability of not detecting the training sequence, of 10⁻⁶, even in the presence of severe distortion introduced by the telephone channel.
    Σ_{i=0}^{N−1} c_i v_{kF_0−i} = a_{k mod L}   ∀k        (8.404)
An estimate v̂_n of the sample v_n is obtained by first compensating for the frequency offset in x̄_n, and successively averaging the received signal over N_v periods:

    v̂_{kF_0−i} = (1/N_v) Σ_{m=0}^{N_v−1} x̄_{kF_0−i−mN} e^{−j2π Δf̂_opt (kF_0−i−mN) T/F_0}
               = e^{−j2π Δf̂_opt (kF_0−i) T/F_0} (1/N_v) Σ_{m=0}^{N_v−1} x̄_{kF_0−i−mN} e^{j2π Δf̂_opt m L T}        (8.405)

    Σ_{i=0}^{N−1} c_i v̂_{kF_0−i} = a_k,   k = 0, 1, …, L − 1        (8.407)
We now take the DFT of both members of the previous equation. Let

    A_p = Σ_{k=0}^{L−1} a_k e^{−j2π kp/L},   p = 0, 1, …, L − 1        (8.408)

    C_q = Σ_{i=0}^{N−1} c_i e^{−j2π iq/N},   q = 0, 1, …, N − 1        (8.409)

    V̂_q = Σ_{i=0}^{N−1} v̂_i e^{−j2π iq/N},   q = 0, 1, …, N − 1        (8.410)

then, for N = F_0 L, (8.407) becomes

    A_p = (1/F_0) Σ_{r=0}^{F_0−1} C_{p+rL} V̂_{p+rL},   p = 0, 1, …, L − 1        (8.411)

The above relation represents a system of L linear equations in N = F_0 L unknowns {C_q}. Among the infinite number of solutions, we choose the one that minimizes the filter energy

    Σ_{i=0}^{N−1} |c_i|² = (1/N) Σ_{q=0}^{N−1} |C_q|²        (8.412)

For the minimization of (8.412) under the constraint (8.411), the method of Lagrange multipliers may be used. We consider here the particular case F_0 = 2; for the general case we refer to [21].
For F_0 = 2 we have N = 2L, and the solution is given by

    C_q = (2 A_{q mod L} V̂_q*) / (|V̂_q|² + |V̂_{(q+L) mod N}|²),   q = 0, 1, …, N − 1        (8.413)

The filter coefficients {c_i} are obtained by inverse DFT:

    c_i = (1/N) Σ_{q=0}^{N−1} C_q e^{j2π iq/N},   i = 0, 1, …, N − 1        (8.414)
Summarizing, the algorithm for the computation of the coefficients of an FSE with sampling period of the input signal equal to T/2 includes the following steps:
1. record N_v sequences of 2L samples from the equalizer delay line;
2. remove the phase rotation as indicated by (8.405);
3. perform a 2L-point DFT to obtain {V̂_q} ({A_p} may be pre-computed);
4. compute {C_q} as indicated by (8.413);
5. perform a 2L-point inverse DFT to obtain {c_i};
6. adjust the coefficients as indicated by (8.406).
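Steps 3-5 above can be sketched as follows for F₀ = 2, using plain DFTs for clarity (function names are ours). By construction the resulting coefficients satisfy the constraint (8.411), so the equalized, T-spaced output of the periodic training signal reproduces the symbols {a_k}.

```python
import cmath

def dft(x, sign=-1):
    """Naive DFT (sign=-1) or unnormalized inverse DFT (sign=+1)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * n * q / N)
                for n in range(N)) for q in range(N)]

def fse_coeffs(a, v_hat):
    """Minimum-energy FSE coefficients for F0 = 2, following (8.408)-(8.414):
    `a` holds the L training symbols, `v_hat` the 2L estimated T/2-spaced
    samples of one period of the noiseless received signal."""
    L, N = len(a), len(v_hat)
    A, V = dft(a), dft(v_hat)
    C = [2 * A[q % L] * V[q].conjugate()
         / (abs(V[q]) ** 2 + abs(V[(q + L) % N]) ** 2) for q in range(N)]
    return [x / N for x in dft(C, sign=+1)]          # inverse DFT (8.414)
```

In practice the DFTs would be computed by FFT, and the coefficients then adjusted as in step 6.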
    θ_k = 2π Δf̂_k kT + (Δφ̂)_k        (8.415)

with the initial value

    (Δφ̂)_0 = S 2π Δf̂_opt T/F_0        (8.416)

to compensate for the phase introduced by the shift, by S positions, of the equalizer coefficients.
After the transition to data mode, the various parameters are updated by adaptive algorithms similar to those derived in Section 8.14.1. In particular, to update the filter coefficients, from (8.382), we have

where 0 < α < 1 is a parameter that allows the LMS algorithm to track small variations of the channel characteristics (see (3.125)). To update the phase estimate, from (8.387), we have
8.16. LE and DFE in the frequency domain with data frames

Figure 8.42. (a) Discrete-time model; (b) overall model employing the polyphase representation.

• White noise sequence {w̃_n} with PSD N_0, and polyphase components {w̃_k^{(0)}} and {w̃_k^{(1)}}.
Let {a_k}, k = 0, 1, …, M − 1, be the transmitted sequence of symbols. To simplify the implementation of the equalizer, we consider the transmission of an extended sequence of symbols {a_k^{(px)}}, obtained by partially repeating {a_k} [23]:

    a_k^{(px)} = a_k,        k = 0, 1, …, M − 1
    a_k^{(px)} = a_{M+k},    k = −1, …, −N_px        (8.423)

In (8.423), N_px is the length of the cyclic prefix, which is related to N_h, the length of the channel impulse response in number of symbol periods T, by

    N_px ≥ N_h − 1        (8.424)

We note that (8.423) assumes a transmission data frame such that, between blocks of data, there is a guard period within which the data are partially repeated. For a given bandwidth of the transmission channel, or rather for a given symbol rate, the system introduces an overhead of N_px symbols every M information symbols.
From the received sequences {x_k^{(ℓ)}}, for k = −N_px, …, −1, 0, 1, …, M − 1, ℓ = 0, 1, we omit the first N_px samples, and we consider the sequence

    z_k^{(ℓ)} = x_k^{(ℓ)},   k = 0, 1, …, M − 1;    z_k^{(ℓ)} = 0 elsewhere        (8.425)
We also introduce the following vectors with M components and the corresponding DFTs:

    a = [a_0, …, a_{M−1}]^T        (8.426)
    A = [A_0, …, A_{M−1}]^T = DFT[a]        (8.427)

For ℓ = 0, 1,

    h^{(ℓ)} = [h_0^{(ℓ)}, …, h_{N_h−1}^{(ℓ)}, 0, …, 0]^T        (8.428)
    H^{(ℓ)} = [H_0^{(ℓ)}, …, H_{M−1}^{(ℓ)}]^T = DFT[h^{(ℓ)}]        (8.429)
    z^{(ℓ)} = [z_0^{(ℓ)}, …, z_{M−1}^{(ℓ)}]^T        (8.432)

It is easy to verify that if (8.424) holds, the same conditions as in (1.116) are verified; therefore we have the relation

    Z_m^{(ℓ)} = A_m H_m^{(ℓ)},   m = 0, 1, …, M − 1        (8.436)
with

    C̄_m^{(1)} = C_m^{(1)} e^{j2π m/M}        (8.438)

Finally, we have

    y = IDFT[Y]        (8.439)

The receiver structure with a linear equalizer in the frequency domain is illustrated in Figure 8.43; the convolution between x and c is replaced by three M-point DFTs.
The attractiveness of this structure resides in the simplicity of the determination of the equalizer coefficients to be used in (8.437). A first method, described in Section 8.15, consists of adopting a training sequence of suitable length; from the DFTs of the various signals we then determine the DFT of the sequence c (see (8.413)). As an alternative, proposed in [24], we describe the MSE method (8.12) that, for f = q/(2M(T/2)), q = 0, 1, …, 2M − 1, yields the DFT of the optimum FSE with sampling period of the input signal equal to T/2. In this case we assume as known the impulse response {h_i} and also its 2M-point DFT.

Figure 8.43. Structure of a linear equalizer in the frequency domain with data frames using a cyclic prefix.
    C_q = G_Rc(f)|_{f = q/(2M(T/2))} = σ_a² H_q* / (N_0 + σ_a² ½ [|H_q|² + |H_{q+M}|²]),   q = 0, 1, …, 2M − 1        (8.440)

Recalling the properties of the DFT and (8.422), we have, for m = 0, 1, …, M − 1,

    C_m^{(0)} = ½ (C_m + C_{m+M})        (8.441)

and

    C_m^{(1)} = ½ (e^{−j2π m/(2M)} C_m + e^{−j(2π/(2M))(m+M)} C_{m+M})        (8.442)
              = ½ e^{−jπ m/M} (C_m − C_{m+M})        (8.443)

    C̄_m^{(1)} = (e^{jπ m/M} / 2) (C_m − C_{m+M})        (8.444)

We note that, by exploiting the data frame structure with cyclic prefix, the implementation of Figure 8.43 uses DFTs with a number of samples lower than that required by the general frequency domain method illustrated in Figure 3.22.
A frequency domain DFE, which utilizes a known data sequence as a guard interval rather than a prefix, has been proposed in [25]. In general, its performance is much better than that of the above LE configuration. In [25], reduced-complexity methods are also proposed for the direct design of the FF filter in the frequency domain.
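The cyclic prefix mechanism of (8.423)-(8.425) and the per-bin relation (8.436) can be sketched end to end as below, for a noise-free, single-branch toy model with one-tap-per-bin zero-forcing equalization (all names are ours; the book's MSE design (8.440) would replace the ZF division).

```python
import cmath

def dft(x, sign=-1):
    """Naive DFT (sign=-1) or unnormalized inverse DFT (sign=+1)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * n * q / N)
                for n in range(N)) for q in range(N)]

def cp_channel(a, h, Npx):
    """Prepend a cyclic prefix of Npx >= len(h)-1 symbols (8.423), pass the
    block through the FIR channel h, and discard the first Npx outputs
    (8.425); the result is the circular convolution of `a` with `h`."""
    ext = a[-Npx:] + a
    x = [sum(h[i] * ext[n - i] for i in range(len(h)) if n >= i)
         for n in range(len(ext))]
    return x[Npx:]

def fde_zf(z, h, M):
    """One coefficient per DFT bin: Z_m = A_m H_m (8.436), so the block is
    recovered as IDFT[Z_m / H_m] (zero-forcing, noise-free sketch)."""
    H = dft(list(h) + [0.0] * (M - len(h)))
    Z = dft(z)
    Y = [Z[m] / H[m] for m in range(M)]
    return [v / M for v in dft(Y, sign=+1)]          # inverse DFT
```

Because the prefix turns linear convolution into circular convolution, equalization costs one complex division per bin plus the DFT/IDFT pair, which is the source of the complexity advantage noted above.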
8.17. Numerical results obtained by simulations

We anticipate that the ZF equalizer is designed by the DFT method (see Section 8.7). We also introduce the abbreviation DFE-VA to indicate the method consisting of the FF filter of a DFE, followed by MLSD implemented by the VA, with M^{M_2} states determined by the impulse response coefficients ψ_{n+D}, n = 0, …, M_2: this technique is commonly used to shorten the overall channel impulse response and thus simplify the VA (see case (2a) on page 679).
1. ZF with N = 7 and D = 0;
2. LE with N = 7 and D = 0;
4. VA with 4⁴ = 256 states and path memory depth equal to 15, i.e. approximately three times the length of {h_n};
The performance of the various receivers is illustrated in Figure 8.44; for comparison, the
performance achieved by transmission over an ideal AWGN channel is also given.
[Figure 8.44. Bit error probability, P_bit, as a function of Γ (dB) for QPSK transmission over a minimum phase channel, using various equalization and data detection methods; curves are shown for ZF, LE, DFE-VA, DFSE, DFE, VA, and the ideal AWGN channel.]
1. ZF with N = 7 and D = 4;
2. LE with N = 7 and D = 4;

[Figure 8.45. Bit error probability, P_bit, as a function of Γ (dB) for QPSK transmission over a non-minimum phase channel, using various equalization and data detection methods; curves are shown for ZF, LE, DFE-VA, DFSE, DFE, VA, and the ideal AWGN channel.]
[Figure 8.46. Bit error probability, P_bit, as a function of Γ (dB) for 8-PSK transmission over a minimum phase channel, using various equalization and data detection methods; curves are shown for DFE, DFE-VA, DFSE, RSSE, and the ideal AWGN channel.]
[Figure 8.47. Bit error probability, P_bit, as a function of Γ (dB) for 8-PSK transmission over a non-minimum phase channel, using various equalization and data detection methods; curves are shown for DFE, DFSE, RSSE, and the ideal AWGN channel.]
In these simulations, the error probability for a DFE-VA is in the range between 0.08 and
0.2 and it is not shown; the performance of the various receivers is illustrated in Figure 8.47.
By comparison of the results of Figure 8.46 and Figure 8.47 we observe that the RSSE
and DFSE may be regarded as valid approximations of the VA as long as the overall channel
impulse response is minimum phase.
8.18. Diversity combining techniques

Figure 8.48. Receiver with two receive antennas for flat fading radio channels.

Antenna arrays
Let us consider the scheme of Figure 8.4. We generalize it to the case of two receive antennas: thus we obtain the scheme of Figure 8.48, where the receive filter of each branch is the filter matched to the received pulse. For a non-dispersive transmission channel, assuming absence of ISI, for a single transmit antenna and N_A^{Rc} receive antennas, at the i-th antenna branch the sampled signal, with sampling period equal to T, is given by

    x_k^{(i)} = A a_k g_{C,0}^{(i)} + w̃_k^{(i)} + ṽ_k^{(i)}        (8.445)

where a_k is the desired symbol, A is the amplitude of the desired pulse at the output of the downsampler (in the scheme of Figure 8.48, A = E_h, the energy of the transmitted pulse h_Tx), g_{C,0}^{(i)} is a complex coefficient representing the flat fading channel with E[|g_{C,0}^{(i)}|²] = 1, and {w̃_k^{(i)}}, i = 1, …, N_A^{Rc}, are uncorrelated sequences of i.i.d. noise samples, having variance σ²_{w̃^{(i)}} = N_0 E_h. The sample ṽ_k^{(i)} represents the interference on the i-th branch due to undesired signals using the same carrier as the desired signal. If N_I is the number of interfering signals, assumed synchronous with the desired signal, we have

    ṽ_k^{(i)} = Σ_{j=1}^{N_I} A^{(j)} a_k^{(j)} g_{C,0}^{(i,j)}        (8.446)

With regard to the j-th interfering signal, a_k^{(j)} is the generic symbol of the transmitted message, A^{(j)} is the amplitude of the interfering pulse at the output of the downsampler, and g_{C,0}^{(i,j)} represents the flat fading channel between the j-th interfering antenna and the i-th receive antenna.
We also assume that there is no Doppler spread; therefore the channels are time invariant for the duration of the transmission. Finally, we recall the reference signal-to-noise ratio at the decision point for an ideal AWGN channel, Γ_MF = 2E_h/N_0, given by (7.113).
The signal at the decision point is usually a linear combination of the samples given by (8.445), that is,

    y_k = Σ_{i=1}^{N_A^{Rc}} c^{(i)} x_k^{(i)}        (8.447)

where {c^{(i)}}, i = 1, …, N_A^{Rc}, are suitable coefficients. In general, however, the combination can be made either directly on the received signals {r_C^{(i)}(t)}, i = 1, …, N_A^{Rc}, thus realizing a pre-detection combiner, or on the signals after the matched filter as in (8.447), realizing in this case a post-detection combiner.
We emphasize that the scheme of Figure 8.48 represents the simplest case, with only two antennas, of a structure, or array, that in general has N_A^{Rc} antennas. Depending on the placement of the elements, arrays are classified as: 1) linear, if the elements are aligned on a straight line; 2) circular, if the elements are placed on a circle; 3) planar, if the elements are placed on a grid. More complex structures also exist.
Combining techniques
For the selection of the coefficients in (8.447) we consider two switching techniques and three combining techniques. To simplify the analysis we assume ṽ_k^{(i)} absent, or included in w̃_k^{(i)}.
1. Selective combining. Only one of the received signals is selected. Let i_SC be the branch corresponding to the received signal with the highest statistical power,

    i_SC = arg max_{i ∈ {1, …, N_A^{Rc}}} M_{r_C^{(i)}}        (8.448)

where the different powers are estimated using a training sequence. In some cases, in place of the statistical power, the bit error probability or the receive signal-to-noise ratio⁸ is used as the decision parameter. Based on the decision (8.448), the receiver selects the antenna i_SC and consequently extracts the signal x_k^{(i_SC)}, aligned in phase.
With reference to (8.447), this method is equivalent to selecting

    c^{(i)} = g_{C,0}^{(i)*}   if i = i_SC,   c^{(i)} = 0   if i ≠ i_SC        (8.449)

and

    y_k = g_{C,0}^{(i_SC)*} x_k^{(i_SC)} = |g_{C,0}^{(i_SC)}|² A a_k + g_{C,0}^{(i_SC)*} w̃_k^{(i_SC)}        (8.450)

At the decision point we have

    Γ_SC = Γ_MF |g_{C,0}^{(i_SC)}|²        (8.451)
⁸ This parameter can be estimated from the estimate of the channel impulse response (see Appendix 3.B).
A variant of this technique consists in selecting only two or three signals, which are then combined.
2. Switched combining. Another antenna is selected only when the statistical power of the signal, or equivalently the bit error probability or the receive signal-to-noise ratio, drops below a given threshold. Once a new antenna is selected, the signal is processed as in the previous case.
3. Equal gain combining (EGC). In this case the signals are only aligned in phase; therefore we have

    c^{(i)} = e^{−j arg(g_{C,0}^{(i)})} = e^{j arg(g_{C,0}^{(i)*})},   i = 1, …, N_A^{Rc}        (8.452)

It results in

    y_k = A a_k (Σ_{i=1}^{N_A^{Rc}} |g_{C,0}^{(i)}|) + Σ_{i=1}^{N_A^{Rc}} c^{(i)} w̃_k^{(i)}        (8.453)

which yields

    Γ_EGC = Γ_MF (Σ_{i=1}^{N_A^{Rc}} |g_{C,0}^{(i)}|)² / N_A^{Rc}        (8.454)

This technique is often used in receivers of DPSK signals (see Section 6.5); in this case the combining is obtained by summing the differentially demodulated signals on the various branches.
4. Maximal ratio combining (MRC). We assume the absence of interferers. The MRC criterion consists in maximizing the signal-to-noise ratio at the decision point. Substituting (8.445) in (8.447), and taking the expectation with respect to the message and the various noise signals, we get

    Γ = 2E_h² |Σ_{i=1}^{N_A^{Rc}} c^{(i)} g_{C,0}^{(i)}|² / Σ_{i=1}^{N_A^{Rc}} |c^{(i)}|² σ²_{w̃^{(i)}}        (8.455)

Using the Schwarz inequality (see page 4), the signal-to-noise ratio in (8.455) is maximized for

    c^{(i)} = K g_{C,0}^{(i)*} / σ²_{w̃^{(i)}},   i = 1, …, N_A^{Rc}        (8.456)

where K is a constant.
Because the noise signals of the various branches have the same variance, the choice

    c^{(i)} = K g_{C,0}^{(i)*}        (8.457)

may be made; the resulting signal-to-noise ratio,

    Γ_MRC = Γ_MF Σ_{i=1}^{N_A^{Rc}} |g_{C,0}^{(i)}|²

shows that the total signal-to-noise ratio is the sum of the signal-to-noise ratios of the individual branches.
It is interesting to note that the choice (8.457) is also the solution of the maximum likelihood criterion associated with the signals (8.445).
In the case of uncorrelated channels with Rayleigh statistics, the performance, in terms of bit error probability, of the various combining techniques can be obtained analytically by applying the technique described in Section 6.12 [26, 27, 28].
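The SNR expressions (8.451), (8.454), and the MRC sum rule are easy to compare numerically. The sketch below evaluates them for one channel realization with equal unit noise variance per branch (function and variable names are ours): MRC always dominates the other two, while EGC may fall below SC for very unbalanced branches.

```python
def combining_snr(g, gamma_mf):
    """Decision-point SNR of selective, equal gain, and maximal ratio
    combining for one flat-fading realization g = [g^(1), ..., g^(N)]."""
    mags = [abs(x) for x in g]
    sc = gamma_mf * max(m * m for m in mags)        # (8.451): best branch only
    egc = gamma_mf * sum(mags) ** 2 / len(mags)     # (8.454): phase alignment
    mrc = gamma_mf * sum(m * m for m in mags)       # sum of branch SNRs
    return sc, egc, mrc
```

Averaging such per-realization SNRs over Rayleigh-distributed gains reproduces the diversity gains obtained analytically in Section 6.12.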
5. Optimum combining (OC). The MRC criterion performs well in situations where the desired signal power is much greater than the power of the interfering signals; otherwise the effect of interference must be considered. In the OC criterion, the ratio between the power of the desired signal and the power of the interference plus noise (SINR) is maximized.
In practice the coefficients {c^{(i)}}, i = 1, …, N_A^{Rc}, are determined by the Wiener formulation, where y_k is the sample at the decision point and a_k is the desired symbol. Defining

    x_k = [x_k^{(1)}, …, x_k^{(N_A^{Rc})}]^T,   c = [c^{(1)}, …, c^{(N_A^{Rc})}]^T        (8.461)

we have

    y_k = x_k^T c        (8.462)

Recalling the expression of the autocorrelation matrix

    R = E[x_k* x_k^T]        (8.463)

and of the cross-correlation vector

    p = E[a_k x_k*]        (8.464)

the optimum coefficient vector is given by

    c = R⁻¹ p        (8.465)

For a dispersive channel, the vector of samples is extended to

    x_k = [x_k^{(1)}, …, x_{k−N+1}^{(1)}, x_k^{(2)}, …, x_{k−N+1}^{(2)}, …, x_{k−N+1}^{(N_A^{Rc})}]^T        (8.467)
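A sketch of the Wiener solution (8.463)-(8.465) for two branches, with R and p replaced by sample averages (our simplification; the book works with exact statistics, and the names are ours):

```python
def wiener_oc_2branch(samples, desired):
    """Estimate R = E[x* x^T] and p = E[a x*] from data and solve c = R^{-1} p
    for two receive branches; `samples` is a list of pairs (x1, x2)."""
    n = len(desired)
    R = [[sum(x[i].conjugate() * x[j] for x in samples) / n for j in range(2)]
         for i in range(2)]
    p = [sum(a * x[i].conjugate() for a, x in zip(desired, samples)) / n
         for i in range(2)]
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    c0 = (R[1][1] * p[0] - R[0][1] * p[1]) / det     # explicit 2x2 inverse
    c1 = (R[0][0] * p[1] - R[1][0] * p[0]) / det
    return c0, c1
```

Because R includes the contribution of any interferers, this solution automatically trades noise suppression against interference suppression, which is what distinguishes OC from MRC.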
Diversity in transmission
We now discuss some diversity methods in which N_A^{Tx} transmit antennas and one receive antenna are employed [51, 52, 53].
First, we distinguish closed-loop methods, with feedback from the receiver, from open-loop methods, without feedback. Obviously in the first case there is a return channel to communicate the selected parameters to the transmitter; the drawback of these methods is that, besides requiring a certain capacity, the parameters of the transmission channel are known with a certain delay.
2. Transmit array. The signals sent over the various antennas are multiplied by coefficients {c^{(i)}}, i = 1, …, N_A^{Tx}, computed at the receiver (see Figure 8.50). At the receiver the criterion can be the MRC or the OC.
In order not to increase the average transmitted power, the coefficients must be scaled so that

    Σ_{i=1}^{N_A^{Tx}} |c^{(i)}|² = 1        (8.468)

In [52] a method is presented to determine the coefficients that minimize the receiver bit error probability, under a constraint on the total transmitted power or on the power transmitted by each antenna.
[Block diagrams of the transmit diversity schemes with two transmit antennas (Tx,1, Tx,2) and one receive antenna; in the transmit array scheme of Figure 8.50 the transmitted signals are weighted by the coefficients c^{(1)}, c^{(2)}; the last scheme employs per-antenna pulses h_Tx,1, h_Tx,2, matched filters h*_Tx,1(−t), h*_Tx,2(−t), and combining coefficients c^{(1)}, c^{(2)}.]

    x_k^{(i)} = A a_k g_{C,0}^{(i)} + w̃_k^{(i)},   i = 1, 2        (8.469)
Combining the various outputs using the coefficients

    c^{(i)} = g_{C,0}^{(i)*}        (8.470)

we get a combined sample analogous to that of MRC (cf. (8.471)). If the noise signals w̃_k^{(i)}, i = 1, …, N_A^{Tx}, are uncorrelated, and if each antenna can transmit a signal with maximum power, this scheme has the same performance as an MRC scheme in terms of Γ; for the same total transmitted power, instead, it loses 10 log₁₀ N_A^{Tx} dB. Another drawback lies in the use of at least two orthogonal signals per user. A variant of the system is proposed in [51].
4. Delay diversity. Let s.t/ be the modulated signal, as shown in Figure 8.52. The i-th
antenna transmits the delayed signal
s.t i T / i D 0; : : : ; N ATx 1 (8.472)
At the receiver, the message is detected from the resulting signal with ISI by MLSD [54].
5. Time switched transmit diversity. As illustrated in Figure 8.53, the transmitter selects in turn the antenna on which to transmit a symbol sequence. The method is much simpler than the previous one; however, it requires channel coding and interleaving (see Chapter 11) to correct the errors introduced on the less reliable channels.
[Figure residue: Figure 8.53 shows the transmit filter $h_{Tx}$ switched in turn among the antennas Tx,1, ..., Tx,$N_{A_{Tx}}$; Figure 8.54 shows the space-time scheme, where antenna 1 transmits the sequence $a_{2n}, -a^*_{2n+1}, a_{2n+2}, -a^*_{2n+3}, \ldots$ and antenna 2 the sequence $a_{2n+1}, a^*_{2n}, a_{2n+3}, a^*_{2n+2}, \ldots$, with combining coefficients $g_{C,0}^{(1)}$, $g_{C,0}^{(2)}$ at the receiver.]
6. Space-time transmit diversity. The basic scheme is illustrated in Figure 8.54 [53]. The message $\{a_k\}$ is transmitted over two antennas: over antenna 1 the transmitted signal is modulated by the data sequence
$$\ldots, a_{2n}, -a^*_{2n+1}, \ldots \qquad (8.473)$$
and over antenna 2 the transmitted signal is modulated by the data sequence
$$\ldots, a_{2n+1}, a^*_{2n}, \ldots \qquad (8.474)$$
726 Chapter 8. Channel equalization and symbol detection
For a receiver with filter matched to the pulse $h_{Tx}$, at the decision point we have the signal
$$x_k = \begin{cases} A \left( a_{2n}\, g_{C,0}^{(1)} + a_{2n+1}\, g_{C,0}^{(2)} \right) + \tilde w_{2n} & \text{for } k = 2n \\[4pt] A \left( -a^*_{2n+1}\, g_{C,0}^{(1)} + a^*_{2n}\, g_{C,0}^{(2)} \right) + \tilde w_{2n+1} & \text{for } k = 2n+1 \end{cases} \qquad (8.475)$$
Assuming the channels are known, we consider the combination of the samples
$$y_{2n} = g_{C,0}^{(1)*}\, x_{2n} + g_{C,0}^{(2)}\, x^*_{2n+1} \qquad (8.476)$$
and
$$y_{2n+1} = g_{C,0}^{(2)*}\, x_{2n} - g_{C,0}^{(1)}\, x^*_{2n+1} \qquad (8.477)$$
A comparison with the MRC technique leads to the same considerations made on (8.471).
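The combining rules (8.476)-(8.477) can be verified numerically. The sketch below (with arbitrary gains and a hypothetical QPSK pair) encodes one space-time block, forms the noiseless samples of (8.475), and checks that each combined output is a scaled replica of one symbol, free of interference from the other.

```python
import numpy as np

rng = np.random.default_rng(2)
g1 = complex(rng.normal(), rng.normal())   # g_C0^(1) (hypothetical)
g2 = complex(rng.normal(), rng.normal())   # g_C0^(2) (hypothetical)
A = 1.0
a0, a1 = 1 + 1j, -1 + 1j                   # QPSK pair (a_2n, a_2n+1), illustrative

# Space-time block: antenna 1 sends a0 then -conj(a1); antenna 2 sends a1
# then conj(a0). Noiseless decision-point samples, cf. (8.475).
x0 = A * (a0 * g1 + a1 * g2)
x1 = A * (-np.conj(a1) * g1 + np.conj(a0) * g2)

# Combining as in (8.476)-(8.477).
y0 = np.conj(g1) * x0 + g2 * np.conj(x1)
y1 = np.conj(g2) * x0 - g1 * np.conj(x1)

gain = abs(g1) ** 2 + abs(g2) ** 2         # diversity gain factor
```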
We conclude this section by mentioning diversity techniques with more than one transmit
and receive antenna, called space-time coding techniques, whereby the message is coded
by using suitable channel codes [55, 56, 57].
Bibliography
[1] G. Ungerboeck, “Fractional tap-spacing equalizer and consequences for clock recovery in data modems”, IEEE Trans. on Communications, vol. 24, pp. 856–864, Aug. 1976.
[3] J. E. Mazo and J. Salz, “Probability of error for quadratic detectors”, Bell System Technical Journal, vol. 44, Nov. 1965.
[4] J. G. Proakis, Digital communications. New York: McGraw-Hill, 3rd ed., 1995.
[6] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of linear codes for
minimizing symbol error rate”, IEEE Trans. on Information Theory, vol. 20, pp. 284–
287, Mar. 1974.
[7] A. J. Viterbi, “An intuitive justification and simplified implementation of the MAP
decoder for convolutional codes”, IEEE Journal on Selected Areas in Communications,
vol. 16, pp. 260–264, Feb. 1998.
[8] G. Ungerboeck, “Adaptive maximum likelihood receiver for carrier modulated data
transmission systems”, IEEE Trans. on Communications, vol. 22, pp. 624–635, May
1974.
[9] G. E. Bottomley and S. Chennakeshu, “Unification of MLSE receivers and extension
to time-varying channels”, IEEE Trans. on Communications, vol. 46, pp. 464–472,
Apr. 1998.
[10] D. G. Messerschmitt and E. A. Lee, Digital communication. Boston, MA: Kluwer
Academic Publishers, 2nd ed., 1994.
[11] K. M. Chugg and A. Anastasopoulos, “On symbol error probability bounds for ISI-like
channels”, IEEE Trans. on Communications, vol. 49, pp. 1704–1709, Oct. 2001.
[14] J. B. Anderson and S. Mohan, “Sequential coding algorithms: a survey and cost
analysis”, IEEE Trans. on Communications, vol. 32, pp. 169–176, Feb. 1984.
[15] M. V. Eyuboglu and S. U. H. Qureshi, “Reduced-state sequence estimator with set
partitioning and decision feedback”, IEEE Trans. on Communications, vol. 36, pp. 13–
20, Jan. 1988.
[16] M. V. Eyuboglu, “Reduced-state sequence estimation for coded modulation on intersymbol interference channels”, IEEE Journal on Selected Areas in Communications, vol. 7, pp. 989–995, Aug. 1989.
[17] R. E. Kamel and Y. Bar-Ness, “Reduced-complexity sequence estimation using state
partitioning”, IEEE Trans. on Communications, vol. 44, pp. 1057–1063, Sept. 1996.
[18] W. Sheen and G. L. Stuber, “Error probability for reduced-state sequence estimation”,
IEEE Journal on Selected Areas in Communications, vol. 10, pp. 571–578, April 1992.
[19] R. Raheli, A. Polydoros, and C.-K. Tzou, “Per-survivor processing: a general ap-
proach to MLSE in uncertain environments”, IEEE Trans. on Communications, vol. 43,
pp. 354–364, Feb.–Apr. 1995.
[25] N. Benvenuto and S. Tomasin, “On the comparison between OFDM and single carrier
modulation with a DFE using a frequency domain feedforward filter”, IEEE Trans. on
Communications, 2002.
[30] J. H. Winters, “Optimum combining in digital mobile radio with cochannel interfer-
ence”, IEEE Journal on Selected Areas in Communications, vol. 2, pp. 528–539, July
1984.
[31] J. H. Winters, J. Salz, and R. D. Gitlin, “The impact of antenna diversity on the ca-
pacity of wireless communication systems”, IEEE Trans. on Communications, vol. 42,
pp. 1740–1751, Feb.–Apr. 1994.
[32] W. C. Jakes, Microwave mobile communications. New York: IEEE Press, 1993.
[36] R. T. Compton, Adaptive antennas: concepts and performance. Englewood Cliffs, NJ:
Prentice-Hall, 1988.
[41] I. P. Kirsteins and D. W. Tufts, “Adaptive detection using low rank approximation to
a data matrix”, IEEE Trans. on Aerospace and Electronic Systems, vol. 30, pp. 55–67,
Jan. 1994.
[43] W. C. Y. Lee, “Effects on correlation between two mobile radio base-station antennas”,
IEEE Trans. on Communications, vol. 21, pp. 1214–1224, Nov. 1973.
[44] R. A. Monzingo and T. W. Miller, Introduction to adaptive arrays. New York: John
Wiley & Sons, 1980.
[46] J. Salz and J. H. Winters, “Effect of fading correlation on adaptive arrays in digital
wireless communications”, IEEE Trans. on Vehicular Technology, vol. 43, pp. 1049–
1057, Nov. 1994.
[47] G. Tsoulos, M. Beach, and J. McGeehan, “Wireless personal communications for the
21st century: European technological advances in adaptive antennas”, IEEE Commu-
nications Magazine, vol. 35, pp. 102–109, Sept. 1997.
[59] S. Benedetto and E. Biglieri, Principles of digital transmission with wireless applica-
tions. New York: Kluwer Academic Publishers, 1999.
[60] N. Al-Dhahir and J. M. Cioffi, “MMSE decision-feedback equalizers: finite-length
results”, IEEE Trans. on Information Theory, vol. 41, pp. 961–975, July 1995.
8.A. Calculus of variations and receiver optimization 731
The first part of this appendix introduces the calculus of variations [58]. This technique
will be applied to solve two optimization problems as described in the second part of the
appendix.
Linear functional

Definition 8.2
Let $x(t)$, $t \in \Re$, be a real-valued signal. We define the linear functional of $x$ as the integral
$$L_x = \int_{-\infty}^{+\infty} g(t)\, x(t)\, dt \qquad (8.479)$$
where $g(t)$ is also a real-valued signal.
By (1.20) we represent the signal $x$ by referring to a complete orthonormal basis $\{\phi_i(t)\}$, $i \in \mathcal{I}$, with $\mathcal{I}$ a finite or countable set, that is
$$x(t) = \sum_{i \in \mathcal{I}} x_i\, \phi_i(t) \qquad (8.480)$$
where
$$x_i = \langle x(t), \phi_i(t) \rangle = \int_{-\infty}^{+\infty} x(t)\, \phi_i(t)\, dt \qquad i \in \mathcal{I} \qquad (8.481)$$
are the signal components with respect to the basis.
Substitution of (8.480) in (8.479) yields
$$L_x = \int_{-\infty}^{+\infty} g(t) \left[ \sum_{i \in \mathcal{I}} x_i\, \phi_i(t) \right] dt = \sum_{i \in \mathcal{I}} x_i \left[ \int_{-\infty}^{+\infty} g(t)\, \phi_i(t)\, dt \right] \qquad (8.482)$$
where we assume that the order of summation and integration can be exchanged. Let
$$g_i = \langle g(t), \phi_i(t) \rangle = \int_{-\infty}^{+\infty} g(t)\, \phi_i(t)\, dt \qquad i \in \mathcal{I} \qquad (8.483)$$
$= g(t)$, using (8.485).
As we will see later, it is convenient to express the functional in the frequency domain; applying the Parseval theorem, we get
$$L_x = \int_{-\infty}^{+\infty} g(t)\, x(t)\, dt = \int_{-\infty}^{+\infty} G(f)\, X^*(f)\, df = L_X \qquad (8.490)$$
Quadratic functional

Definition 8.3
Let $x(t)$, $t \in \Re$, be a real-valued signal. We define the quadratic functional of $x$ as the integral
$$Q_x = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} x(t)\, A(t, \tau)\, x(\tau)\, d\tau\, dt \qquad (8.492)$$
$$Q_x = \mathbf{x}^T \mathbf{C}\, \mathbf{x} \qquad (8.493)$$
Defining
$$A_i(\tau) = \langle A(t, \tau), \phi_i(t) \rangle = \int_{-\infty}^{+\infty} A(t, \tau)\, \phi_i(t)\, dt \qquad i \in \mathcal{I} \qquad (8.495)$$
Analogously, defining
$$A'_j(t) = \langle A(t, \tau), \phi_j(\tau) \rangle = \int_{-\infty}^{+\infty} A(t, \tau)\, \phi_j(\tau)\, d\tau \qquad j \in \mathcal{I} \qquad (8.497)$$
then
$$A(t, \tau) = \sum_{j \in \mathcal{I}} A'_j(t)\, \phi_j(\tau) \qquad (8.498)$$
j2I
The gradient of (8.493) is not equal to that computed in (2.30): in fact, in general, the matrix $\mathbf{C}$ is not symmetric and the vector $\mathbf{x}$ is real-valued. Hence, we obtain
$$\nabla_x Q_x = \sum_{i \in \mathcal{I}} \sum_{j \in \mathcal{I}} \left[ \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \phi_i(\xi)\, [A(\xi, \tau) + A(\tau, \xi)]\, d\xi\; \phi_j(\tau)\, d\tau \right] x_j\, \phi_i(t) \qquad (8.502)$$
Substituting (8.495) and (8.498) in the above equation, changing variables, and using (8.496) and (8.498), we obtain
$$\begin{aligned} \nabla_x Q_x &= \sum_{i \in \mathcal{I}} \sum_{j \in \mathcal{I}} \left[ \int_{-\infty}^{+\infty} [A_i(\tau) + A'_i(\tau)]\, \phi_j(\tau)\, d\tau \right] x_j\, \phi_i(t) \\ &= \int_{-\infty}^{+\infty} \left[ \sum_{i \in \mathcal{I}} [A_i(\tau) + A'_i(\tau)]\, \phi_i(t) \right] \left[ \sum_{j \in \mathcal{I}} x_j\, \phi_j(\tau) \right] d\tau \qquad (8.503) \\ &= \int_{-\infty}^{+\infty} [A(t, \tau) + A(\tau, t)]\, x(\tau)\, d\tau \end{aligned}$$
Also in this case we express the quadratic functional in the frequency domain by applying the Parseval theorem:
$$Q_x = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} X(f)\, B(f, \nu)\, X^*(\nu)\, df\, d\nu = Q_X \qquad (8.504)$$
To conclude, we give an example of a quadratic functional that will be useful in the next
subsection.
Example 8.A.1
We consider the following quadratic functional in the frequency domain:
$$Q_X = \int_{-\infty}^{+\infty} P(f)\, |X(f)|^2\, df \qquad (8.507)$$
Defining $r(\tau) = \mathcal{F}^{-1}[P(f)]$, it can be verified that the corresponding time and frequency operators are
$$\nabla_X Q_X = 2\, P(f)\, X(f) \qquad (8.510)$$
$$Q_C(f) = H_{Tx}(f)\, G_C(f) \qquad (8.511)$$
$$J = E\!\left[\, |y_k - a_k|^2 \,\right] \qquad (8.512)$$
$$h_i = q_R(t_0 + iT) \qquad (8.514)$$
[Figure 8.55: baseband equivalent scheme, with transmit filter $h_{Tx}$, received signal $r_C(t)$ affected by noise $w_C(t)$, receive filter $g_{Rc}$, sampler at instants $t_0 + kT$, and detector $\hat a_k = Q[y_k]$.]
$$H(f) = Q_R(f)\, e^{j2\pi f t_0} \qquad (8.516)$$
$$Q_R(f) = Q_C(f)\, G_{Rc}(f) \qquad (8.517)$$
$$\begin{aligned} J &= \sum_{i=-\infty}^{+\infty} \sum_{j=-\infty}^{+\infty} E[a_i a^*_j\, h_{k-i} h^*_{k-j}] + \sigma^2_{w_R} + \sigma^2_a + 2\,\mathrm{Re}\left\{ \sum_{i=-\infty}^{+\infty} E[a_i h_{k-i} w^*_{R,k}] \right\} \\ &\quad - 2\,\mathrm{Re}\left\{ \sum_{i=-\infty}^{+\infty} E[a_i h_{k-i} a^*_k] \right\} - 2\,\mathrm{Re}\left\{ E[w_{R,k} a^*_k] \right\} \end{aligned} \qquad (8.518)$$
From the above assumptions, the fourth and the sixth terms in the last expression are identically zero; therefore
$$J = \sigma^2_{w_R} + \sum_{i=-\infty}^{+\infty} \sum_{j=-\infty}^{+\infty} r_a(i-j)\, h_{k-i}\, h^*_{k-j} - 2\,\mathrm{Re}\left\{ \sum_{i=-\infty}^{+\infty} r_a(i-k)\, h_{k-i} \right\} + \sigma^2_a \qquad (8.519)$$
$$\begin{aligned} J &= \sigma^2_{w_R} + \sum_{p=-\infty}^{+\infty} \sum_{q=-\infty}^{+\infty} r_a(q-p)\, h_p\, h^*_q - 2\,\mathrm{Re}\left\{ \sum_{p=-\infty}^{+\infty} r_a(-p)\, h_p \right\} + \sigma^2_a \\ &= \sigma^2_{w_R} + \sum_{p=-\infty}^{+\infty} \sum_{q=-\infty}^{+\infty} r_a(p-q)\, h^*_p\, h_q - 2\,\mathrm{Re}\left\{ \sum_{p=-\infty}^{+\infty} r_a(p)\, h^*_p \right\} + \sigma^2_a \end{aligned} \qquad (8.520)$$
It is convenient to express the above terms in the frequency domain, using the following relations:
• power spectral density of the message
$$P_a(f) = T \sum_{n=-\infty}^{+\infty} r_a(n)\, e^{-j2\pi f n T} \qquad (8.521)$$
a) The first term is a quadratic cost function. From (8.510) the gradient with respect to $H(f)$ is given by
$$2\, \frac{P_{w_C}(f)}{|Q_C(f)|^2}\, H(f) \qquad (8.527)$$
b) The second term is also a quadratic functional, with the difference with respect to the previous term that a transformation given by a periodic repetition is performed on $H(f)$; it can be proven that the gradient with respect to $H(f)$ is
$$\frac{2}{T^2}\, P_a(f) \sum_{\ell=-\infty}^{+\infty} H\!\left(f - \frac{\ell}{T}\right) \qquad (8.528)$$
$$\nabla_H J = 0 \qquad (8.531)$$
$$\nabla_H J = 2\, \frac{P_{w_C}(f)}{|Q_C(f)|^2}\, H(f) + \frac{2}{T^2}\, P_a(f) \sum_{\ell=-\infty}^{+\infty} H\!\left(f - \frac{\ell}{T}\right) - \frac{2}{T}\, P_a(f) = 0 \qquad (8.532)$$
Now we introduce the function $G(f)$ that coincides with the first term of (8.532),
$$G(f) = \frac{P_{w_C}(f)}{|Q_C(f)|^2}\, H(f) \qquad (8.533)$$
As both the second and the third term on the left-hand side of (8.532) are periodic functions of period $1/T$, then also $G(f)$ is a periodic function of period $1/T$. Substituting (8.533) in (8.532), it must be
$$G(f) + \frac{1}{T^2}\, P_a(f) \sum_{\ell=-\infty}^{+\infty} \frac{\left| Q_C\!\left(f - \frac{\ell}{T}\right) \right|^2}{P_{w_C}\!\left(f - \frac{\ell}{T}\right)}\; G\!\left(f - \frac{\ell}{T}\right) - \frac{1}{T}\, P_a(f) = 0 \qquad (8.534)$$
Considering that the function $G(f)$ is periodic, it can be brought out of the summation; after some steps we get the optimum solution for $G(f)$:
$$G(f) = \frac{P_a(f)}{T + P_a(f)\, \dfrac{1}{T} \displaystyle\sum_{\ell=-\infty}^{+\infty} \frac{\left| Q_C\!\left(f - \frac{\ell}{T}\right) \right|^2}{P_{w_C}\!\left(f - \frac{\ell}{T}\right)}} \qquad (8.535)$$
Hence, using (8.533),
$$H(f) = \frac{|Q_C(f)|^2}{P_{w_C}(f)}\, G(f) = \frac{|Q_C(f)|^2}{P_{w_C}(f)}\; \frac{P_a(f)}{T + P_a(f)\, \dfrac{1}{T} \displaystyle\sum_{\ell=-\infty}^{+\infty} \frac{\left| Q_C\!\left(f - \frac{\ell}{T}\right) \right|^2}{P_{w_C}\!\left(f - \frac{\ell}{T}\right)}} \qquad (8.536)$$
Finally, using (8.517), the expression of the optimum receive filter is given by
$$G_{Rc}(f) = e^{-j2\pi f t_0}\, \frac{Q^*_C(f)}{P_{w_C}(f)}\; \frac{P_a(f)}{T + P_a(f)\, \dfrac{1}{T} \displaystyle\sum_{\ell=-\infty}^{+\infty} \frac{\left| Q_C\!\left(f - \frac{\ell}{T}\right) \right|^2}{P_{w_C}\!\left(f - \frac{\ell}{T}\right)}} \qquad (8.537)$$
Hence, (8.11) is proven.
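As an illustrative sketch (not from the text), (8.537) can be evaluated on a frequency grid. The overall pulse, noise spectrum, and symbol spectrum below are hypothetical stand-ins, the phase factor is dropped by taking $t_0 = 0$, and the folded sum is truncated to a finite number of aliases.

```python
import numpy as np

T = 1.0
f = np.linspace(-2.0, 2.0, 401)             # frequency grid, spacing 0.01 (illustrative)

def Q_C(f):                                  # stand-in overall pulse spectrum Q_C(f)
    return 1.0 / (1.0 + (2.0 * f) ** 2)

def P_w(f):                                  # stand-in white noise PSD P_wC(f)
    return np.full_like(f, 0.1)

P_a = T                                      # P_a(f) = sigma_a^2 T with sigma_a^2 = 1

# Folded term (1/T) sum_l |Q_C(f - l/T)|^2 / P_wC(f - l/T), truncated to |l| <= 20.
L = 20
folded = sum(np.abs(Q_C(f - l / T)) ** 2 / P_w(f - l / T)
             for l in range(-L, L + 1)) / T

# Optimum receive filter (8.537) with t_0 = 0, so the phase factor equals 1.
G_Rc = (np.conj(Q_C(f)) / P_w(f)) * P_a / (T + P_a * folded)
```

The folded term is periodic of period $1/T$, as the derivation requires; the test below checks this on the grid.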
However, even for the particular case of i.i.d. input symbols, for which
$$M_{Tx} = \frac{\sigma^2_a}{T} \int_{-\infty}^{+\infty} |H_{Tx}(f)|^2\, df \qquad (8.539)$$
the problem cannot be solved in closed form. Here we give the procedure to determine the solution [58, 59].
1. For each frequency $f \in \left[ -\frac{1}{2T}, \frac{1}{2T} \right]$, determine the integer $\ell_{max}$ such that
$$\ell_{max}(f) = \arg \max_{\ell \in \mathbb{Z}} \frac{\left| G_C\!\left(f - \frac{\ell}{T}\right) \right|^2}{P_{w_C}\!\left(f - \frac{\ell}{T}\right)} \qquad (8.540)$$
$$\mathbf{b} = [b_1, b_2, \ldots, b_{M_2}]^T \qquad \tilde{\mathbf{b}} = [1, \mathbf{b}^T]^T \qquad (8.544)$$
and the vectors
$$\mathbf{x}_k = [x_k, x_{k-1}, \ldots, x_{k-M_1+1}]^T$$
The filter coefficients $\mathbf{c}$ and $\tilde{\mathbf{b}}$ are determined by minimizing the cost function
$$J = E\!\left[\, |y_k - a_{k-D}|^2 \,\right] \qquad (8.549)$$
$$\begin{aligned} \mathbf{R}_a &= E[\mathbf{a}^*_k \mathbf{a}^T_k] \qquad & \mathbf{R}_x &= E[\mathbf{x}^*_k \mathbf{x}^T_k] \\ \mathbf{R}_{ax} &= E[\mathbf{a}^*_k \mathbf{x}^T_k] \qquad & \mathbf{R}_{xa} &= E[\mathbf{x}^*_k \mathbf{a}^T_k] \end{aligned} \qquad (8.550)$$
On the other hand, minimization of the MSE implies orthogonality of the error $e_k$ with respect to the input $\mathbf{x}_k$. Consequently, using the expression (8.548) for $e_k$, it must be
$$\tilde{\mathbf{b}}^H \mathbf{R}_{ax} = \mathbf{c}^H \mathbf{R}_x \qquad (8.553)$$
We recognize that $\mathbf{R}_{a|x}$ is the correlation matrix of the estimation error vector (see Appendix 2.A).
The problem thus reduces to finding the vector $\tilde{\mathbf{b}}$ that minimizes the quadratic form
$$J = \tilde{\mathbf{b}}^H \mathbf{R}_{a|x}\, \tilde{\mathbf{b}} \qquad (8.557)$$
Defining $\mathbf{e}_1 = [1, \mathbf{0}_{1 \times M_2}]^T$ and remembering that $\tilde b_1$, the first component of $\tilde{\mathbf{b}}$, must be equal to 1, the solution is obtained by the minimization of the quadratic function
$$J = \tilde{\mathbf{b}}^H \mathbf{R}_{a|x}\, \tilde{\mathbf{b}} \qquad (8.558)$$
8.B. DFE design: matrix formulations 743
[Figure 8.56: geometric illustration of the constrained minimization for $M_2 = 2$, showing the components $\tilde b_1$, $b_1$, $b_2$ of $\tilde{\mathbf{b}}$.]
$$v = \mathbf{e}_1^T \tilde{\mathbf{b}} - 1 = 0 \qquad (8.559)$$
Figure 8.56 illustrates the problem in the case $M_2 = 2$. The vector $\tilde{\mathbf{b}}$ that minimizes $J$ subject to the constraint $v = 0$ is obtained by the method of Lagrange multipliers,
$$\begin{cases} \nabla_{\tilde{\mathbf{b}}}\, J + \lambda\, \nabla_{\tilde{\mathbf{b}}}\, v = 0 \\ v = 0 \end{cases} \qquad (8.560)$$
$$\mathbf{c}_{opt} = \mathbf{R}_x^{-1}\, \mathbf{R}_{xa}\, \tilde{\mathbf{b}}_{opt} \qquad (8.563)$$
Although this procedure is very general, being valid for arbitrary statistics of the information message, it is rather computationally expensive, because it requires two matrix inversions, for $\mathbf{R}_{a|x}$ and $\mathbf{R}_x$.
Observation 8.15
For an LE, the formulation is as in (8.563), however, without the filter $\mathbf{b}$, and with $\hat{\mathbf{a}}_k = [\hat a_{k-D}]$ and $\tilde{\mathbf{b}} = [1]$.
8.B.2 Method based on the channel impulse response and i.i.d. symbols
This method is obtained by applying a matrix formulation similar to that of Section 8.5.
We introduce a definition that will simplify the notation.
Definition 8.4
Let $\mathbf{q} = [q_1, \ldots, q_N]^T$ be a vector with $N$ elements, and let
$$\mathbf{q}_{i:j} = [q_i, q_{i+1}, \ldots, q_j]^T \qquad (8.564)$$
denote the vector containing a subsequence of consecutive elements of $\mathbf{q}$.
Let $\mathbf{A} = [A_{n,m}]$, $n = 1, \ldots, N_R$, $m = 1, \ldots, N_C$, be an $N_R \times N_C$ matrix. If $\mathbf{A}_{\bullet,m}$ denotes the $m$-th column, then
$$\mathbf{A}_{\bullet,\, i:j} = [\mathbf{A}_{\bullet,i}, \mathbf{A}_{\bullet,i+1}, \ldots, \mathbf{A}_{\bullet,j}] \qquad (8.565)$$
denotes the matrix containing a subsequence of consecutive columns of the matrix $\mathbf{A}$.
Figure 8.57 illustrates the vector representation of the scheme of Figure 8.5, extended
to include a DFE.
Let $\{h_i\}$, $i = -N_1, \ldots, N_2$, be the overall impulse response at the equalizer input. Introducing the vectors
$$\mathbf{x}_k = [x_k, x_{k-1}, \ldots, x_{k-M_1+1}]^T$$
where
$$\mathbf{H} = \begin{bmatrix} h_{-N_1} & h_{-N_1+1} & h_{-N_1+2} & \cdots & h_{N_2} & 0 & 0 & \cdots & 0 \\ 0 & h_{-N_1} & h_{-N_1+1} & \cdots & h_{N_2-1} & h_{N_2} & 0 & \cdots & 0 \\ 0 & 0 & h_{-N_1} & \cdots & h_{N_2-2} & h_{N_2-1} & h_{N_2} & \cdots & 0 \\ \vdots & \vdots & \vdots & & & & & & \vdots \\ 0 & 0 & 0 & \cdots & & h_{-N_1} & h_{-N_1+1} & \cdots & h_{N_2} \end{bmatrix} \qquad (8.568)$$
where $M_3 = M_1 + N_2 - D - 1 - M_2$.
The sample $y_k$ at the decision point can thus be expressed as
$$y_k = \mathbf{c}^T \mathbf{x}_k + \mathbf{b}'^T \mathbf{a}_k = \mathbf{c}^T \mathbf{H}\, \mathbf{a}_k + \mathbf{b}'^T \mathbf{a}_k + \mathbf{c}^T \tilde{\mathbf{w}}_k = (\mathbf{c}^T \mathbf{H} + \mathbf{b}'^T)\, \mathbf{a}_k + \mathbf{c}^T \tilde{\mathbf{w}}_k \qquad (8.570)$$
Equation (8.570) suggests that the cancellation of ISI from the sample $y_k$, due to the symbols $\{a_{k-D-i}\}$, $i = 1, \ldots, M_2$, is obtained by setting
$$\mathbf{b}' = \left[ \mathbf{0}_{1 \times (N_1+D+1)},\; -(\mathbf{c}^T \mathbf{H})_{N_1+D+2\,:\,N_1+D+M_2+1},\; \mathbf{0}_{1 \times M_3} \right]^T \qquad (8.571)$$
$$y_k = \mathbf{c}^T (\mathbf{H}' \mathbf{a}_k + \tilde{\mathbf{w}}_k) \qquad (8.572)$$
where $\mathbf{H}'$ denotes the matrix $\mathbf{H}$ with the columns from $N_1+D+2$ to $N_1+D+M_2+1$ set to zero. We are now back to the case of one filter $\mathbf{c}$, having as input the vector $(\mathbf{H}' \mathbf{a}_k + \tilde{\mathbf{w}}_k)$, whose coefficients are determined by minimizing the cost function $J = E[\,|y_k - a_{k-D}|^2\,]$.
From the Wiener theory, we know that to find the solution we must compute the autocorrelation matrix $\mathbf{R}$ of the input signal and the cross-correlation vector $\mathbf{p}$ between the desired output signal and the input signal. From
$$\mathbf{R} = E\!\left[ (\mathbf{H}' \mathbf{a}_k + \tilde{\mathbf{w}}_k)^* (\mathbf{H}' \mathbf{a}_k + \tilde{\mathbf{w}}_k)^T \right] \qquad (8.575)$$
assuming the symbols of the sequence $\{a_k\}$ are i.i.d. with variance $\sigma^2_a$, we obtain
$$\mathbf{p} = E[a_{k-D}\, (\mathbf{H}' \mathbf{a}_k + \tilde{\mathbf{w}}_k)^*] = \mathbf{H}'^*\, E[a_{k-D}\, \mathbf{a}^*_k] \qquad (8.577)$$
given the statistical independence between transmitted symbols and noise. As the symbols are i.i.d., the vector $E[a_{k-D}\, \mathbf{a}^*_k]$ has all zero elements except that in position $N_1+D+1$, with value $\sigma^2_a$, and $\mathbf{p}$ corresponds to the $(N_1+D+1)$-th column of the matrix $\mathbf{H}'^*$ multiplied by the scalar $\sigma^2_a$,
$$\mathbf{p} = \sigma^2_a\, \mathbf{H}'^*_{\bullet,\, N_1+D+1} = \sigma^2_a\, \left[ h_D, h_{D-1}, \ldots, h_{-N_1}, \mathbf{0}_{1 \times (M_1-(N_1+D+1))} \right]^H \qquad (8.578)$$
According to the Wiener theory, the optimum FF filter coefficients are given by
$$\mathbf{c}_{opt} = \mathbf{R}^{-1} \mathbf{p} = \left( \sigma^2_a\, \mathbf{H}'^* \mathbf{H}'^T + \mathbf{R}_{\tilde w} \right)^{-1} \sigma^2_a\, \mathbf{H}'^*_{\bullet,\, N_1+D+1} \qquad (8.579)$$
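The construction (8.568)-(8.579) can be sketched as follows, for an illustrative real channel and hypothetical DFE parameters: build $\mathbf{H}$, zero the columns cancelled by the FB filter to obtain $\mathbf{H}'$, solve the Wiener equations for $\mathbf{c}$, and read the FB coefficients off $\mathbf{c}^T \mathbf{H}$ as in (8.571).

```python
import numpy as np

# Illustrative channel {h_i}, i = -N1,...,N2, and hypothetical DFE parameters.
h = np.array([0.2, 1.0, 0.5, -0.1])        # h_{-1}, h_0, h_1, h_2
N1, N2 = 1, 2
M1, M2, D = 8, 2, 2                        # FF taps, FB taps, decision delay
sigma_a2, sigma_w2 = 1.0, 0.01

L = M1 + N1 + N2                           # number of columns of H, cf. (8.568)
H = np.zeros((M1, L))
for r in range(M1):
    H[r, r:r + N1 + N2 + 1] = h            # Toeplitz structure of (8.568)

# H': columns N1+D+2,...,N1+D+M2+1 (1-based) set to zero, cf. (8.571)-(8.572).
Hp = H.copy()
Hp[:, N1 + D + 1:N1 + D + 1 + M2] = 0.0

# Wiener solution (8.575)-(8.579) for i.i.d. symbols and white noise.
R = sigma_a2 * Hp @ Hp.T + sigma_w2 * np.eye(M1)
p = sigma_a2 * Hp[:, N1 + D]               # (N1+D+1)-th column of H'* (real here)
c = np.linalg.solve(R, p)

# FB coefficients from (8.571): cancel the postcursors of c^T H.
psi = c @ H                                # overall response at the decision point
b = -psi[N1 + D + 1:N1 + D + 1 + M2]
```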
Observation 8.16
For an LE, the formulation is as in (8.579) and (8.580), however, without the filter $\mathbf{b}$, and with $\mathbf{H}'$ equal to $\mathbf{H}$.
$$\begin{aligned} \mathbf{R}_a &= E[\mathbf{a}^*_k \mathbf{a}^T_k] \\ \mathbf{R}_x &= E[\mathbf{x}^*_k \mathbf{x}^T_k] = \mathbf{H}^* \mathbf{R}_a \mathbf{H}^T + \mathbf{R}_{\tilde w} \\ \mathbf{R}_{ax} &= E[\mathbf{a}^*_k \mathbf{x}^T_k] = \mathbf{R}_a \mathbf{H}^T \\ \mathbf{R}_{xa} &= E[\mathbf{x}^*_k \mathbf{a}^T_k] = \mathbf{H}^* \mathbf{R}_a \end{aligned} \qquad (8.581)$$
Setting
$$\mathbf{R}_D = \left[ \mathbf{0}_{(M_2+1) \times (N_1+D)},\; \mathbf{I}_{M_2+1},\; \mathbf{0}_{(M_2+1) \times M_3} \right] \left[ \mathbf{R}_a^{-1} + \mathbf{H}^T \mathbf{R}_{\tilde w}^{-1} \mathbf{H}^* \right]^{-1} \begin{bmatrix} \mathbf{0}_{(N_1+D) \times (M_2+1)} \\ \mathbf{I}_{M_2+1} \\ \mathbf{0}_{M_3 \times (M_2+1)} \end{bmatrix} \qquad (8.583)$$
(8.582) becomes
$$J = [1, \mathbf{b}^H]\, \mathbf{R}_D \begin{bmatrix} 1 \\ \mathbf{b} \end{bmatrix} \qquad (8.584)$$
$$[1, b_{opt,1}, \ldots, b_{opt,M_2}]^T = \frac{\mathbf{R}_D^{-1} \mathbf{e}_1}{\mathbf{e}_1^T \mathbf{R}_D^{-1} \mathbf{e}_1} \qquad (8.585)$$
With this choice for the FB filter, the minimum value of the MSE is given by
$$J_{min} = \frac{1}{\mathbf{e}_1^T \mathbf{R}_D^{-1} \mathbf{e}_1} \qquad (8.586)$$
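A numerical check of (8.585)-(8.586), with a random positive-definite stand-in for $\mathbf{R}_D$: the normalized solution has first component 1, attains $J_{min}$, and no perturbation that preserves the constraint does better.

```python
import numpy as np

rng = np.random.default_rng(4)
M2 = 3
A_ = rng.normal(size=(M2 + 1, M2 + 1))
R_D = A_ @ A_.T + 0.1 * np.eye(M2 + 1)     # random positive-definite stand-in for R_D

e1 = np.zeros(M2 + 1)
e1[0] = 1.0

# Constrained minimizer (8.585) and minimum MSE (8.586).
z = np.linalg.solve(R_D, e1)               # R_D^{-1} e_1
b_tilde = z / (e1 @ z)                     # first component forced to 1
J_min = 1.0 / (e1 @ z)

# Any other vector with first component equal to 1 has a larger cost.
for _ in range(100):
    v = b_tilde + np.concatenate(([0.0], 0.1 * rng.normal(size=M2)))
    assert v @ R_D @ v >= J_min - 1e-12
```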
Observation 8.17
We note that if the number of coefficients $M_2$ of the FB filter is chosen equal to the length of the overall channel impulse response at the decision point, that is if $M_2 = N_1 + N_2 + 1$, the performance is improved; moreover, it is possible to reduce the computational complexity by exploiting the special structure of the matrices [60].
8.B.4 FS-DFE
Although the FB filter necessarily operates with input samples having sampling period equal
to T , the FF filter can be fractionally spaced. According to the Wiener theory, the optimum
coefficients of c and b are obtained by one of the two methods presented in the previous
sections; however, care must be taken in accounting for the fact that the FF filter operates
with input samples having sampling period equal to T =F0 , where F0 is the oversampling
factor.
The input signal to the filter $\mathbf{c}$ is still denoted by the vector $\mathbf{x}_k$, whose components are the vectors $\{\mathbf{x}_i\}$, $i = k, \ldots, k-M_1+1$, each with dimension $F_0$. Then we can write
$$\mathbf{x}_k = \begin{bmatrix} \mathbf{x}_k \\ \vdots \\ \mathbf{x}_{k-M_1+1} \end{bmatrix} = \mathbf{H}\, \mathbf{a}_k + \tilde{\mathbf{w}}_k \qquad (8.592)$$
where $\tilde{\mathbf{w}}_k$ follows the same structure as $\mathbf{x}_k$, and
$$\mathbf{H} = \begin{bmatrix} \mathbf{h}_{-N_1} & \mathbf{h}_{-N_1+1} & \mathbf{h}_{-N_1+2} & \cdots & \mathbf{h}_{N_2} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{h}_{-N_1} & \mathbf{h}_{-N_1+1} & \cdots & \mathbf{h}_{N_2-1} & \mathbf{h}_{N_2} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{h}_{-N_1} & \cdots & \mathbf{h}_{N_2-2} & \mathbf{h}_{N_2-1} & \mathbf{h}_{N_2} & \cdots & \mathbf{0} \\ \vdots & \vdots & \vdots & & & & & & \vdots \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & & \mathbf{h}_{-N_1} & \mathbf{h}_{-N_1+1} & \cdots & \mathbf{h}_{N_2} \end{bmatrix} \qquad (8.593)$$
Note that the above relation is formally identical to (8.567). Therefore, thanks to this
property, we can obtain the optimum coefficients of the FF and FB filters by the methods
introduced in the previous sections.
8.C. Equalization based on the peak value of ISI 749
The equalization algorithm discussed in this appendix is related to the eye diagram at the decision point [4]. For an equalizer with $N$ coefficients, let $\psi_n$ be the overall impulse response at the decision point:
$$\psi_n = \sum_{j=0}^{N-1} c_j\, h_{n-j} \qquad n = -N_1, \ldots, N_2+N-1 \qquad (8.594)$$
$$\eta_n = \psi_{n+D} \qquad (8.595)$$
Set
$$L_1 = N_1 + D \qquad L_2 = N_2 + N - 1 - D \qquad (8.598)$$
$$L = L_2 + L_1 + 1 \qquad (8.599)$$
the cost function $J$ to be minimized considers the peak value of the ISI, that is
$$J = \sum_{n=-L_1,\; n \neq 0}^{L_2} |\eta_n| \qquad (8.600)$$
[Figure residue: evolution of the equalizer adaptation over the iterations $i = 0, 1, 2$, showing the overall impulse responses $\{h_n^{(i)}\}$ and the corresponding ISI samples $\{\eta_n^{(i)}\}$.]
If we denote by $\{\eta_n^{(i)}\}$ the ISI samples at the $i$-th iteration, the law of coefficient update by the gradient method is given by
$$c_j^{(i+1)} = c_j^{(i)} - \mu\, \mathrm{sgn}\!\left( \eta_{j-D}^{(i)} \right) \qquad j = 0, \ldots, N-1 \quad j \neq D \qquad (8.602)$$
$$c_D^{(i+1)} = 1 - \sum_{j=0,\; j \neq D}^{N-1} c_j^{(i+1)}\, h_{D-j} \qquad (8.603)$$
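A minimal sketch of the update (8.602)-(8.603), under the assumptions of an illustrative channel with $h_0 = 1$ (as the constraint form of (8.603) requires) and an open eye, i.e. peak distortion below 1, so that the iteration converges:

```python
import numpy as np

# Illustrative channel with open eye (sum of |ISI terms| < |h_0|) and h_0 = 1.
h = np.array([0.1, 1.0, 0.3, -0.2])        # h_{-1}, h_0, h_1, h_2
N1 = 1
N, D, mu = 7, 3, 0.01                      # equalizer taps, delay, step size

c = np.zeros(N)
c[D] = 1.0                                 # start from a pass-through equalizer

for _ in range(500):
    psi = np.convolve(c, h)                # psi_n of (8.594), stored at index n + N1
    for j in range(N):
        if j != D:
            # (8.602): eta_{j-D} = psi_j, stored at array index j + N1
            c[j] -= mu * np.sign(psi[j + N1])
    # (8.603): constraint on the main tap, with h_0 = 1
    c[D] = 1.0 - sum(c[j] * h[N1 + D - j]
                     for j in range(N)
                     if j != D and 0 <= N1 + D - j < len(h))

psi = np.convolve(c, h)
peak_isi = np.sum(np.abs(psi)) - np.abs(psi[D + N1])   # cost (8.600)
```

For this channel the unequalized peak ISI is 0.6; after adaptation the residual oscillates at the step-size level.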
We consider a discrete-time system with input sequence $\{i_k\}$ and output sequence $\{o_k\}$, evaluated at instants $kT$. We say that the output sequence is generated by an FSM if there exist a sequence of states $\{s_k\}$ and two functions $f_o$ and $f_s$ such that
$$o_k = f_o(i_k, s_{k-1}) \qquad (8.605)$$
$$s_k = f_s(i_k, s_{k-1}) \qquad (8.606)$$
as illustrated in Figure 8.59. The first equation expresses the fact that the output sample depends on the current input and on the previous state of the system. The second equation represents the memory part of the FSM and describes the state evolution.
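A minimal FSM in the sense of (8.605)-(8.606) can be sketched as follows; the particular choice $f_o = f_s = i_k\, s_{k-1}$ (a differential encoder over $\{-1, +1\}$) is purely illustrative.

```python
# Minimal FSM per (8.605)-(8.606): o_k = f_o(i_k, s_{k-1}), s_k = f_s(i_k, s_{k-1}).
def run_fsm(inputs, s0, f_o, f_s):
    s = s0                            # s_{k-1}
    outputs = []
    for i_k in inputs:
        outputs.append(f_o(i_k, s))   # output depends on input and previous state
        s = f_s(i_k, s)               # state evolution
    return outputs, s

# Illustrative choice: differential encoder over {-1, +1}.
f_o = lambda i, s: i * s
f_s = lambda i, s: i * s
out, s_final = run_fsm([1, -1, -1, 1], 1, f_o, f_s)
```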
We note that if f s is a one-to-one function of i k , that is if every transition between two
states is determined by a unique input value, then (8.605) can be written as
Chapter 9
Orthogonal frequency
division multiplexing
For channels that exhibit high signal attenuation at frequencies within the passband, a valid
alternative to CAP/QAM is represented by a modulation technique based on filter banks,
known as orthogonal frequency division multiplexing (OFDM), or multicarrier modulation.
As the term implies, multicarrier modulation is obtained in principle by modulating several
carriers in parallel using blocks of symbols, therefore using a symbol period that is typically
much longer than the symbol period of a single-carrier system transmitting at the same
bit rate. The resulting narrowband signals around the frequencies of the carriers are then
added and transmitted over the channel. The narrowband signals are usually referred to as
subchannel signals.
An advantage of OFDM with respect to single-carrier systems is represented by the
lower complexity required for equalization, that under certain conditions can be performed
by a filter with a single coefficient per subchannel. A long symbol period also yields a
greater immunity of an OFDM system to impulse noise; however the symbol duration, and
hence the number of subchannels, are limited for transmission over time-variant channels.
As we will see in this chapter, another important aspect is represented by the efficient
implementation of modulator and demodulator, obtained by sophisticated signal processing
algorithms.1
1 In this text, we refer to OFDM as a general technique for multicarrier transmission. In particular, this applies to
both wired and wireless systems. When multicarrier transmission is achieved without filtering of the individual
subchannel signals, other authors use the term discrete multitone (DMT) modulation for wired transmission
systems, whereas they reserve the term OFDM for wireless systems.
754 Chapter 9. Orthogonal frequency division multiplexing
$$H_i(z) = \sum_{n=-\infty}^{+\infty} h_n[i]\, z^{-n} \qquad (9.2)$$
We observe that the support of $\{g_n[i]\}$ is $\{1, 2, \ldots, \gamma M\}$, and, for an ideal channel, $D_0 = 0$.
Time domain
With reference to the general scheme of Figure 9.1, at the output of the $j$-th receive subchannel, before downsampling, the impulse response relative to the $i$-th input is given by
$$\sum_{p=0}^{\gamma M-1} h_p[i]\, g_{n-p}[j] = \sum_{p=0}^{\gamma M-1} h_p[i]\, h^*_{\gamma M+p-n}[j] \qquad \forall n \qquad (9.5)$$
We note that, for $j = i$, the peak of the sequence given by (9.5) is obtained for $n = \gamma M$. Observing (9.5), transmission in the absence of intersymbol interference over a subchannel, as well as absence of interchannel interference (ICI) between subchannels, is achieved if orthogonality conditions are satisfied, that in the time domain are expressed as
$$\sum_{p=0}^{\gamma M-1} h_p[i]\, h^*_{p-kM}[j] = \delta_{ij}\, \delta_k \qquad i, j = 0, \ldots, M-1 \qquad (9.6)$$
Hence, in the ideal channel case considered here, the vector sequence at the output of the decimator filter bank is a replica of the transmitted vector sequence with a delay of $\gamma$ modulation intervals, that is $\{\mathbf{y}_k\} = \{\mathbf{a}_{k-\gamma}\}$. Sometimes the elements of a set of orthogonal impulse responses that satisfy (9.6) are called wavelets.
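For a concrete check of the orthogonality conditions, consider an exponentially modulated filter bank with a rectangular prototype of length $M$, i.e. $\gamma = 1$, so that only the $k = 0$ term of (9.6) has support within the filter length; this is an illustrative special case, not the general construction of the text.

```python
import numpy as np

M = 8
n = np.arange(M)
# Rectangular prototype of length gamma*M with gamma = 1 (illustrative),
# normalized so that the main correlation tap equals 1.
h = np.ones(M) / np.sqrt(M)
# Exponentially modulated bank: h_n[i] = h_n exp(j 2 pi i n / M).
bank = [h * np.exp(2j * np.pi * i * n / M) for i in range(M)]

# Orthogonality (9.6) for the k = 0 term; with gamma = 1 the shifts k != 0
# fall outside the common support, so this is the whole condition.
G = np.array([[np.vdot(bank[j], bank[i]) for j in range(M)] for i in range(M)])
```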
Frequency domain
In the frequency domain, the conditions (9.6) are expressed as
M
X 1
M M
Hi f ` HŁj f ` D Ži j i; j D 0; : : : ; M 1 (9.7)
`D0
T T
z-transform domain
In the ideal channel case, the relation between the inputs $a_k[i]$, $i = 0, \ldots, M-1$, and the $j$-th output $y_k[j]$ is represented by the block diagram of Figure 9.2.
Figure 9.2. Block diagram that illustrates the relation between the M inputs and the j-th
output.
and
$$\mathbf{y}(z) = \left[ \sum_k y_k[0]\, z^{-k}, \ldots, \sum_k y_k[M-1]\, z^{-k} \right]^T \qquad (9.9)$$
where the element $[\mathbf{S}(z)]_{p,q}$ of the matrix $\mathbf{S}(z)$ is the transfer function between the $q$-th input and the $p$-th output. We note that the orthogonality conditions, expressed in the time domain by (9.6), are satisfied if
$$\mathbf{S}(z) = z^{-\gamma}\, \mathbf{I} \qquad (9.11)$$
where $E_i^{(\ell)}(z)$ is the transfer function of the $\ell$-th polyphase component of $H_i(z)$. We define the vector of the transfer functions of the transmit filters as
$$\mathbf{h}(z) = [H_0(z), \ldots, H_{M-1}(z)]^T \qquad (9.13)$$
Let $\mathbf{e}(z)$ be the vector of delay elements given by
$$\mathbf{e}(z) = [z^{-(M-1)}, z^{-(M-2)}, \ldots, 1]^T \qquad (9.14)$$
and $\mathbf{E}(z)$ the matrix of the transfer functions of the polyphase components of the transmit filters, given by
$$\mathbf{E}(z) = \begin{bmatrix} E_0^{(0)}(z) & E_1^{(0)}(z) & \cdots & E_{M-1}^{(0)}(z) \\ E_0^{(1)}(z) & E_1^{(1)}(z) & \cdots & E_{M-1}^{(1)}(z) \\ \vdots & \vdots & & \vdots \\ E_0^{(M-1)}(z) & E_1^{(M-1)}(z) & \cdots & E_{M-1}^{(M-1)}(z) \end{bmatrix} \qquad (9.15)$$
From the identity
$$z^{-(M-1)}\, \mathbf{e}^H\!\left(\frac{1}{z^*}\right) = [1, z^{-1}, \ldots, z^{-(M-1)}] \qquad (9.16)$$
$\mathbf{h}(z)$ can be expressed as
$$\mathbf{h}^T(z) = z^{-(M-1)}\, \mathbf{e}^H\!\left(\frac{1}{z^*}\right) \mathbf{E}(z^M) \qquad (9.17)$$
Observing (9.17), the M interpolated input sequences are filtered by filters whose transfer
functions are represented in terms of the polyphase components, expressed as columns of
the matrix E.z M /. Because the transmitted signal fsn g is given by the sum of the filter
outputs, the operations of the transmit filter bank are illustrated by the scheme of Figure 9.3,
where the signal fsn g is obtained from a delay line that collects the outputs of the M vectors
of filters with transfer functions given by the rows of E.z M /, and input given by the vector
of interpolated input symbols.
At the receiver, using a representation equivalent to (1.637), that is obtained by a permutation of the polyphase components, we get
$$G_i(z) = \sum_{\ell=0}^{M-1} z^{-(M-1-\ell)}\, R_i^{(\ell)}(z^M) \qquad i = 0, \ldots, M-1 \qquad (9.18)$$
where $R_i^{(\ell)}(z)$ is the transfer function of the $(M-1-\ell)$-th polyphase component of $G_i(z)$. Then the vector of the transfer functions of the receive filters $\mathbf{g}(z) = [G_0(z), \ldots, G_{M-1}(z)]^T$ can be expressed as
Observing (9.19), the $M$ signals at the output of the receive filter bank before downsampling are obtained by filtering in parallel the received signal by filters whose transfer functions are represented in terms of the polyphase components, expressed as rows of the matrix $\mathbf{R}(z^M)$. Therefore the $M$ output signals are equivalently obtained by filtering the vector of received signal samples $[\bar r_{n-(M-1)}, \bar r_{n-M+2}, \ldots, \bar r_n]^T$ by the $M$ vectors of filters with transfer functions given by the rows of the matrix $\mathbf{R}(z^M)$.
In particular, recalling that the receive filters have impulse responses given by (9.4), we obtain
$$G_i(z) = z^{-\gamma M}\, H_i^*\!\left(\frac{1}{z^*}\right) \qquad 0 \le i \le M-1 \qquad (9.21)$$
Substituting (9.21) in (9.19), and using (9.17), the expression of the vector $\mathbf{g}(z)$ of the transfer functions of the receive filters becomes
$$\mathbf{g}(z) = z^{-\gamma M}\, \mathbf{h}^H\!\left(\frac{1}{z^*}\right) = z^{-\gamma M + M - 1}\, \mathbf{E}^H\!\left(\frac{1}{z^{*M}}\right) \mathbf{e}(z) \qquad (9.22)$$
9.3. Efficient implementation of OFDM systems 759
[Figure 9.4: OFDM system obtained from Figure 9.3 via the noble identities, built around the polyphase matrices $\mathbf{E}(z)$ and $\mathbf{E}^H(1/z^*)$, P/S and S/P converters, the delay $z^{-D_0}$, and the receive delay $z^{-(\gamma-1)}$, with inputs $a_k[i]$ and outputs $y_k[i]$, $i = 0, \ldots, M-1$, at rate $1/T$.]
Figure 9.5. Block diagrams of (a) parallel to serial converter, (b) serial to parallel converter.
Apart from a delay term $z^{-1}$, the operations of the receive filter bank are illustrated by the scheme of Figure 9.3.
From Figure 9.3 using (9.16), (9.22), and applying the noble identities given in
Figure 1.70, we obtain the system represented in Figure 9.4, where parallel to serial (P/S)
and serial to parallel (S/P) converters are illustrated in Figures 9.5a and 9.5b, respectively.
Figure 9.6. Equivalent OFDM system with input–output relation expressed in terms of the
matrix S.z/.
From (9.11), in terms of the polyphase components of the transmit filters in (9.15), the orthogonality conditions in the z-transform domain are therefore expressed as
$$\mathbf{E}^H\!\left(\frac{1}{z^*}\right) \mathbf{E}(z) = \mathbf{I} \qquad (9.24)$$
Equivalently, using the polyphase representation of the receive filters in (9.20), we find the conditions
$$\mathbf{R}(z)\, \mathbf{R}^H\!\left(\frac{1}{z^*}\right) = \mathbf{I} \qquad (9.25)$$
$$f_i = \frac{i}{T} \qquad i = 0, \ldots, M-1 \qquad (9.26)$$
In other words, the spacing in frequency between the subcarriers is $\Delta f = 1/T$.
To derive an efficient implementation of uniform filter banks we consider the scheme represented in Figure 9.7a, where $H(z)$ and $G(z)$ are the transfer functions of the transmit and receive prototype filters, respectively. We consider as transmit prototype filter a causal FIR filter with impulse response $\{h_n\}$, having length $\gamma M$ and support $\{0, 1, \ldots, \gamma M - 1\}$; the receive prototype filter is a causal FIR filter matched to the transmit prototype filter, with impulse response given by
$$g_n = h^*_{\gamma M - n} \qquad \forall n \qquad (9.27)$$
Figure 9.7. Block diagram of an OFDM system with uniform filter banks: (a) general scheme, (b) equivalent scheme for $f_i = i/T$, $i = 0, \ldots, M-1$.
With reference to Figure 9.7a, the $i$-th subchannel signal at the channel input is given by
$$s_n[i] = e^{j2\pi \frac{in}{M}} \sum_{k=-\infty}^{+\infty} a_k[i]\, h_{n-kM} \qquad (9.28)$$
As
$$e^{j2\pi \frac{in}{M}} = e^{j2\pi \frac{i(n-kM)}{M}} \qquad (9.29)$$
we obtain
$$s_n[i] = \sum_{k=-\infty}^{+\infty} a_k[i]\, h_{n-kM}\, e^{j2\pi \frac{i(n-kM)}{M}} = \sum_{k=-\infty}^{+\infty} a_k[i]\, h_{n-kM}[i] \qquad (9.30)$$
where
$$h_n[i] = h_n\, e^{j2\pi \frac{in}{M}} \qquad (9.31)$$
Recalling the definition $W_M = e^{-j\frac{2\pi}{M}}$, the z-transform of $\{h_n[i]\}$ is expressed as
$$H_i(z) = H(z\, W_M^i) \qquad i = 0, \ldots, M-1 \qquad (9.32)$$
Observing (9.30) and (9.32), we obtain the equivalent scheme of Figure 9.7b.
The scheme of Figure 9.7b may be considered as a particular case of the general OFDM scheme represented in Figure 9.1; in particular, the transfer functions of the filters can be expressed using the polyphase representations, which are, however, simplified with respect to the general case expressed by (9.15) and (9.20), as we show in the following. Observing (9.28) we express the overall signal $s_n$ as
$$s_n = \sum_{i=0}^{M-1} e^{j2\pi \frac{in}{M}} \sum_{k=-\infty}^{+\infty} h_{n-kM}\, a_k[i] \qquad (9.33)$$
$$s_m^{(\ell)} = \sum_{k=-\infty}^{+\infty} h^{(\ell)}_{m-k} \sum_{i=0}^{M-1} W_M^{-i\ell}\, a_k[i] \qquad (9.35)$$
Recalling the definition (1.94) of the DFT operator as an $M \times M$ matrix, the IDFT of the vector $\mathbf{a}_k$ is expressed as
$$\frac{1}{M} \sum_{i=0}^{M-1} W_M^{-i\ell}\, a_k[i] = A_k[\ell] \qquad \ell = 0, 1, \ldots, M-1 \qquad (9.38)$$
and
$$s_m^{(\ell)} = M \sum_{k=-\infty}^{+\infty} h^{(\ell)}_{m-k}\, A_k[\ell] = M \sum_{p=-\infty}^{+\infty} h^{(\ell)}_p\, A_{m-p}[\ell] \qquad (9.39)$$
Including the factor $M$ in the definition of the prototype filter impulse response, or in $M$ gain factors that establish the statistical power levels to be assigned to each subchannel signal, an efficient implementation of a uniform transmit filter bank is given by an IDFT, a polyphase network with $M$ branches, and a P/S converter, as illustrated in Figure 9.8.
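For the special case of a rectangular prototype with $\gamma = 1$ over an ideal channel (classic OFDM, with each polyphase filter $H^{(\ell)}(z)$ reduced to a single tap), the whole chain collapses to an IDFT at the transmitter and a DFT at the receiver; a minimal sketch with a hypothetical QPSK block:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 8
a = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=M)  # one block a_k[i]

# Transmitter: IDFT of the block, then P/S conversion. With a rectangular
# prototype and gamma = 1 the polyphase filters reduce to single taps, so
# the IDFT outputs A_k[l] of (9.38) form the serial stream directly.
A = np.fft.ifft(a)           # the 1/M factor of (9.38) is included by ifft
s = A                        # serial samples over one block (ideal channel)

# Receiver: S/P conversion followed by a DFT recovers the block exactly.
y = np.fft.fft(s)
```

In practice the FFT/IFFT pair replaces the DFT/IDFT, as noted below, reducing the complexity per block.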
[Figure 9.8: efficient implementation of an OFDM system. At the transmitter, the block $\mathbf{a}_k$ feeds an IDFT, whose outputs $A_k[\ell]$ drive the polyphase filters $H^{(\ell)}(z)$, $\ell = 0, \ldots, M-1$, followed by a P/S converter; the signal $s_n$ is sent over the channel $G_C(z)$ with additive noise $w_n$. At the receiver, an S/P converter with delay $z^{-D_0}$ is followed by the matched polyphase filters $z^{-(\gamma-1)} H^{(\ell)*}(1/z^*)$ and by a DFT, yielding the outputs $y_k[\ell]$ and the detected symbols $\hat a_k[\ell]$.]
Observing (9.35), we find that the matrix of the transfer functions of the polyphase components of the transmit filters is given by
$$\mathbf{E}(z) = \mathrm{diag}\{H^{(0)}(z), \ldots, H^{(M-1)}(z)\}\, \mathbf{F}_M^{-1} \qquad (9.40)$$
Therefore the vector of the transfer functions of the transmit filters is expressed as
$$\mathbf{h}^T(z) = z^{-(M-1)}\, \mathbf{e}^H\!\left(\frac{1}{z^*}\right) \mathrm{diag}\{H^{(0)}(z^M), \ldots, H^{(M-1)}(z^M)\}\, \mathbf{F}_M^{-1} \qquad (9.41)$$
We note that we would arrive at the same result by applying the notion of prototype filter with the condition (9.26) to (9.15).
The vector of the transfer functions of a receive filter bank, which employs a prototype filter with impulse response given by (9.27), is immediately obtained by applying (9.22), with the matrix of the transfer functions of the polyphase components given by (9.40). Therefore we get
$$\mathbf{g}(z) = z^{-\gamma M + M - 1}\, \mathbf{F}_M\, \mathrm{diag}\left\{ H^{(0)*}\!\left(\frac{1}{z^{*M}}\right), \ldots, H^{(M-1)*}\!\left(\frac{1}{z^{*M}}\right) \right\} \mathbf{e}(z) \qquad (9.42)$$
$$y_k[i] = \sum_{n=-\infty}^{+\infty} g_{kM-n}\, e^{j2\pi \frac{i}{T} n \frac{T}{M}}\, r_n \qquad (9.43)$$
Observing that $e^{j2\pi \frac{i}{M} mM} = 1$, setting $r_m^{(\ell)} = r_{mM+\ell}$ and $h_m^{(\ell)*} = h^*_{mM+\ell}$, and interchanging the order of the summations, we get
$$y_k[i] = \sum_{\ell=0}^{M-1} e^{j\frac{2\pi}{M} i\ell} \sum_{m=-\infty}^{+\infty} h^{(\ell)*}_{\gamma+m-k}\, r_m^{(\ell)} \qquad (9.45)$$
Using the relation $e^{j\frac{2\pi}{M} i\ell} = W_M^{-i\ell}$, we finally find the expression
$$y_k[i] = \sum_{\ell=0}^{M-1} W_M^{-i\ell} \sum_{m=-\infty}^{+\infty} h^{(\ell)*}_{\gamma+m-k}\, r_m^{(\ell)} \qquad (9.46)$$
Provided the orthogonality conditions are satisfied, from the output samples y_k[i], i =
0, \ldots, M-1, threshold detectors may be employed to yield the detected symbols \hat{a}_k[i],
i = 0, \ldots, M-1, with a certain delay D.
As illustrated in Figure 9.8, all filtering operations are carried out at the low rate 1=T .
Also note that in practice the FFT and the inverse FFT are used in place of the DFT and
the IDFT, respectively, thus further reducing the computational complexity.
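As a minimal numerical sketch of this efficient implementation (not part of the original text), the following Python fragment assumes the simplest prototype filter, a rectangular window of length M, for which the polyphase filters reduce to unit gains and the transmitter is just an IFFT followed by P/S conversion; the function names are illustrative.

```python
import numpy as np

def ofdm_transmit(a, M):
    """Map a block of M symbols to M time-domain samples via the IDFT.

    With a rectangular-window prototype filter the polyphase network
    degenerates, and the transmitter reduces to an IFFT followed by P/S.
    The factor M from the text is absorbed into the IDFT here.
    """
    return np.fft.ifft(a) * M          # serialized block of M samples

def ofdm_receive(r, M):
    """Recover the M symbols from one received block (ideal channel)."""
    return np.fft.fft(r) / M

rng = np.random.default_rng(0)
M = 8
a = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=M)  # QPSK symbols
s = ofdm_transmit(a, M)
a_hat = ofdm_receive(s, M)
assert np.allclose(a, a_hat)           # perfect recovery over an ideal channel
```

In practice `np.fft.fft`/`np.fft.ifft` use the FFT algorithm internally, which is exactly the complexity reduction mentioned above.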
Figure 9.9. Block diagram of (a) transmitter and (b) receiver in a transmission system
employing non-critically sampled filter banks, with K > M and f_i = iK/(MT) = i/(MT_c).
The block diagrams of transmit and receive non-critically sampled filter banks are illustrated in Figure 9.9. As
in critically sampled systems, also in non-critically sampled systems it is advantageous to
choose each transmit filter as the frequency-shifted version of a prototype filter with impulse
response fh n g, defined over a discrete-time domain with sampling period Tc D T =K. At
the receiver, each filter is the frequency-shifted version of a prototype filter with impulse
response fgn g, also defined over a discrete-time domain with sampling period Tc D T =K.
As depicted in Figure 9.10, each subchannel filter has a bandwidth equal to K/(MT),
larger than 1/T. Maintaining a spacing between subcarriers of \Delta f = K/(MT), it is easier
to avoid spectral overlapping between subchannels and consequently to avoid ICI. It is
also possible to choose \{h_n\}, e.g., as the impulse response of a square root raised cosine
filter, such that, at least for an ideal channel, the orthogonality conditions are satisfied and
ISI is also avoided. We note that this advantage is obtained at the expense of a larger
bandwidth required for the transmission channel, which changes from M/T for critically
sampled systems to K/T for non-critically sampled systems. Therefore the system requires
an excess bandwidth given by (K - M)/M.
Also for non-critically sampled filter banks it is possible to obtain an efficient imple-
mentation using the discrete Fourier transform [3, 4]. The transmitted signal is expressed
as a function of the input symbol sequences as

    s_n = \sum_{i=0}^{M-1} e^{j 2\pi \frac{iK}{MT} n \frac{T}{K}} \sum_{k=-\infty}^{+\infty} a_k[i] \, h_{n-kK}     (9.47)

or, equivalently,

    s_n = \sum_{k=-\infty}^{+\infty} h_{n-kK} \sum_{i=0}^{M-1} a_k[i] \, W_M^{-in}     (9.48)

Using the definition of the IDFT (9.38), apart from a factor M that can be included in the
impulse response of the filter, and introducing the following polyphase representation of
the transmitted signal

    s_m^{(\ell)} = s_{mM+\ell}     (9.51)

we obtain

    s_m^{(\ell)} = \sum_{k=-\infty}^{+\infty} h_{mM-kK+\ell} \, A_k[\ell]     (9.52)
By analogy with (1.561), (9.52) is obtained by interpolation of the sequence \{A_k[\ell]\} by a
factor K, followed by decimation by a factor M. From (1.569) and (1.570), we introduce
the change of indices

    p = \left\lfloor \frac{mM}{K} \right\rfloor - k     (9.53)

and

    \Delta_m = \frac{mM}{K} - \left\lfloor \frac{mM}{K} \right\rfloor = \frac{(mM) \bmod K}{K}     (9.54)
Using (1.576) it results in

    s_m^{(\ell)} = \sum_{p=-\infty}^{+\infty} h_{(p+\Delta_m)K+\ell} \, A_{\lfloor mM/K \rfloor - p}[\ell]
                 = \sum_{p=-\infty}^{+\infty} h_{pK+\ell+(mM) \bmod K} \, A_{\lfloor mM/K \rfloor - p}[\ell]

Letting

    h_{p,m}^{(\ell)} = h_{pK+\ell+(mM) \bmod K} \qquad p, m \in \mathbb{Z}, \quad \ell = 0, 1, \ldots, M-1     (9.55)

we obtain

    s_m^{(\ell)} = \sum_{p=0}^{+\infty} h_{p,m}^{(\ell)} \, A_{\lfloor mM/K \rfloor - p}[\ell]     (9.56)
The efficient implementation of the transmit filter bank is illustrated in Figure 9.11. We
note that the system is now periodically time-varying, i.e. the impulse response of the fil-
ter components cyclically changes. The M elements of an IDFT output vector are input
to M delay lines. Also note that within a modulation interval of duration T , the sam-
ples stored in some of the delay lines are used to produce more than one sample of the
transmitted signal. Therefore the P/S element used for the realization of critically sampled
filter banks needs to be replaced by a commutator. At instant nT/K, the commutator is
linked to the \ell = (n \bmod M)-th filtering element. The transmit signal s_n is then computed by
convolving the signal samples stored in the \ell-th delay line with the (n \bmod K)-th polyphase
component of the T/K-spaced-coefficients prototype filter. In other terms, each element of
the IDFT output frame is filtered by a periodically time-varying filter with period equal to
\mathrm{lcm}(M, K) \, T/K, where \mathrm{lcm}(M, K) denotes the least common multiple of M and K.
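The cyclic pattern of polyphase indices can be checked with a few lines of Python (an illustrative sketch, not from the text): the offset (mM) mod K, which in (9.55) selects the prototype polyphase coefficients used at output phase index m, repeats with period lcm(M, K)/M in m, i.e. with period lcm(M, K) T/K in time.

```python
from math import gcd

M, K = 4, 5                       # number of subchannels, upsampling factor (K > M)
lcm = M * K // gcd(M, K)          # least common multiple of M and K

# The offset (m*M) mod K determines which coefficients of the T/K-spaced
# prototype filter are used at phase index m; the pattern repeats with
# period lcm(M, K) / M in m, i.e. every lcm(M, K) * T / K seconds.
period = lcm // M
offsets = [(m * M) % K for m in range(2 * period)]
assert offsets[:period] * 2 == offsets    # pattern is periodic with this period
print(offsets[:period])                   # [0, 4, 3, 2, 1]
```

For M = 4, K = 5 the filter is time-varying with period lcm(4, 5) T/5 = 4T, i.e. the coefficient pattern repeats every 4 modulation intervals.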
Likewise, the non-critically sampled filter bank at the receiver can also be efficiently
implemented using the DFT. In particular, we consider the case of downsampling of the
subchannel output signals by a factor K/2, which yields samples at each subchannel output
at an (over)sampling rate equal to 2/T. With reference to Figure 9.9b, we observe that the
output sequence of the i-th subchannel is given by

    y_{n'}[i] = \sum_{n=-\infty}^{+\infty} g_{n'\frac{K}{2}-n} \, e^{-j 2\pi \frac{in}{M}} \, r_n     (9.57)

where g_n = h^*_{\gamma M - n}.
With the change of indices

    n = mM + \ell \qquad m \in \mathbb{Z}, \quad \ell = 0, 1, \ldots, M-1     (9.58)
We note that in (9.59) the term within parenthesis may be viewed as an interpolation by a
factor M followed by a decimation by a factor K/2.
Letting

    q = \left\lfloor \frac{n'K}{2M} \right\rfloor - m     (9.60)

and

    \Delta_{n'} = \frac{n'K}{2M} - \left\lfloor \frac{n'K}{2M} \right\rfloor = \frac{(n'K/2) \bmod M}{M}     (9.61)

the term within parenthesis in (9.59) can be written as

    u_{n'}^{(\ell)} = \sum_{q=-\infty}^{+\infty} g_{qM+(n'K/2) \bmod M - \ell} \, r^{(\ell)}_{\lfloor n'K/(2M) \rfloor - q}     (9.62)

and (9.59) becomes

    y_{n'}[i] = \sum_{\ell=0}^{M-1} u_{n'}^{(\ell)} \, W_M^{i\ell}     (9.65)
The efficient implementation of the receive filter bank is illustrated in Figure 9.12,
where we assume for the received signal the same sampling rate of K/T as for the trans-
mitted signal, and a downsampling factor K/2, so that the samples at each subchannel
output are obtained at a sampling rate equal to 2/T. Note that the delay element z^{-D_0}
at the receiver input has been omitted, as the optimum timing phase for each subchannel
can be recovered by using per-subchannel fractionally spaced equalization, as discussed
in Section 8.4 for single-carrier modulation. Also note that within a modulation interval
of duration T, more than one sample is stored in some of the delay lines to produce
the DFT input vectors. Therefore the S/P element used for the realization of critically
sampled filter banks needs to be replaced by a commutator. After the M elements of
a DFT input vector are produced, the commutator is circularly rotated K/2 steps clock-
wise from its current position, allowing a set of K/2 consecutive received samples to
be input into the delay lines. The content of each delay line is then convolved with
one of the M polyphase components of the T/K-spaced-coefficients receive prototype
filter. A similar structure is obtained if in general a downsampling factor K_0 \le K is
considered.
We consider three simple examples of critically sampled filter bank modulation systems.
For practical applications, equalization techniques and possibly non-critically sampled filter
bank realizations are required, as will be discussed in the following sections.
and we can easily verify that the orthogonality conditions (9.6) are satisfied.
As shown in Figure 9.13, because the frequency responses of the polyphase components
are constant, we obtain directly the transmit signal by applying a P/S conversion at the
output of the IDFT. Assuming an ideal channel, at the receiver a S/P converter forms
blocks of M samples, with boundaries between blocks placed so that each block at the
output of the IDFT at the transmitter is presented unchanged at the input of the DFT. At
the DFT output, the input blocks of M symbols are reproduced without distortion with a
delay equal to T . We note, however, that the orthogonality conditions are satisfied only if
the channel is ideal.
From the frequency response of the prototype filter,

    H\!\left(e^{j 2\pi f \frac{T}{M}}\right) = e^{-j 2\pi f \frac{M-1}{2} \frac{T}{M}} \, \frac{\sin(\pi f T)}{\sin(\pi f T / M)}     (9.68)

using (9.32) the frequency responses of the individual subchannel filters H_i(z) are obtained.
Figure 9.14 shows the amplitude of the frequency responses of adjacent subchannel filters,
obtained for f \in [0, 0.06 \, M/T] and M = 64. We note that the choice of a rectangular
window of length M as impulse response of the baseband prototype filter leads to a significant
overlapping of spectral components of transmitted signals in adjacent subchannels.
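This overlap can be quantified with a short Python sketch (illustrative, not from the text) that evaluates the magnitude of (9.68), normalized to unit peak, for the prototype and for its frequency-shifted neighbor one subcarrier spacing 1/T away.

```python
import numpy as np

M = 64
f = np.linspace(0.001, 3.0, 3000)        # frequency axis normalized to 1/T

def proto_mag(f, M):
    """|H(e^{j 2 pi f T/M})| from (9.68), normalized to unit peak:
    the periodic-sinc response of a length-M rectangular window."""
    return np.abs(np.sin(np.pi * f) / (M * np.sin(np.pi * f / M)))

H0 = proto_mag(f, M)                     # subchannel i = 0
H1 = proto_mag(f - 1.0, M)               # adjacent subchannel, shifted by 1/T

# The first sidelobe of H0 (near f = 1.5/T) is only ~13 dB below the peak,
# so the adjacent subchannel sees substantial interference.
sidelobe_db = 20 * np.log10(H0[(f > 1.2) & (f < 2.0)].max())
print(round(sidelobe_db, 1))
```

The two responses cross near f = 0.5/T at roughly −4 dB, confirming the significant spectral overlap noted above.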
Figure 9.13. Block diagram of an OFDM system with impulse response of the prototype filter
given by a rectangular window of length M.
Figure 9.14. Amplitude of the frequency responses of adjacent subchannel filters in a DMT
system for f \in [0, 0.06 \, M/T] and M = 64. [From [4], © 2002 IEEE.]
root raised cosine filter, with Nyquist frequency equal to M/(2T), is selected to implement
the interpolator filter and the anti-aliasing filter.
We recall now the conditions to obtain a real-valued transmitted signal \{s_n\}. For OFDM
systems with the efficient implementation illustrated in Figure 9.8, it is sufficient that the
coefficients of the prototype filter and the samples A_k[i], \forall k, i = 0, \ldots, M-1, are real-
valued. Observing (9.38), the latter condition implies that the following Hermitian symmetry
conditions must be satisfied
    a_k[0], \; a_k\!\left[\frac{M}{2}\right] \in \mathbb{R}
    a_k[i] = a_k^*[M-i] \qquad i = 1, \ldots, \frac{M}{2}-1     (9.70)
In this case, the symmetry conditions (9.70) also allow a further reduction of the imple-
mentation complexity of the IDFT and DFT.
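The effect of the symmetry conditions (9.70) can be verified numerically; the following Python sketch (illustrative, not from the text) builds a Hermitian-symmetric symbol vector and checks that the IDFT output block is real-valued.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16
a = np.empty(M, dtype=complex)
a[0] = 1.0                                   # a_k[0] real
a[M // 2] = -1.0                             # a_k[M/2] real
a[1:M // 2] = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]),
                         size=M // 2 - 1)    # QPSK on lower subchannels
a[M // 2 + 1:] = np.conj(a[1:M // 2][::-1])  # a_k[i] = a_k*[M-i]  (9.70)

A = np.fft.ifft(a) * M                       # IDFT output block
assert np.allclose(A.imag, 0)                # Hermitian symmetry gives real samples
```

The complexity reduction mentioned above is what routines such as `np.fft.irfft` exploit: they compute the IDFT of a Hermitian-symmetric vector from its first M/2 + 1 entries only.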
When \{s_n\} is a complex signal, the scheme of Figure 9.18 is adopted, where the filters
g_{Tx} and g_{Rc} have the characteristics described above and f_0 is the carrier frequency.
We note that this scheme is analogous to that of a QAM system, with the difference that
the transmit and receive lowpass filters, with impulse responses g_{Tx} and g_{Rc}, respectively,
have different requirements. The baseband equivalent scheme shown in Figure 9.18b is
obtained from the passband scheme by the method discussed in Chapter 7. As the receive
filters approximate an ideal lowpass filter with bandwidth M/(2T), the signal-to-noise ratio
\Gamma at the channel output is assumed equal to that obtained at the output of the baseband
equivalent scheme,

    \Gamma = \frac{M_{s_C}}{N_0 \, M/T}     (9.71)

where M_{s_C} denotes the statistical power of the signal s_{C,n} at the channel input.

Figure 9.18. (a) Passband analog OFDM transmission scheme; (b) baseband equivalent
scheme.
Figure 9.19. Block diagram of a DMT system with cyclic prefix and frequency-domain
equalizer.
After the P/S conversion, where the N_c - 1 samples of the cyclic extension are the first to be
sent, the N_c - 1 + M samples are transmitted over the channel. At the receiver, blocks of
samples of length N_c - 1 + M are taken; the boundaries between blocks are set so that
the last M samples depend on the elements of only one cyclically extended block of
samples. The first N_c - 1 samples of a block are discarded.
We now recall the result (1.116). The vector r_k of the last M samples of the block
received at the k-th modulation interval is expressed as

    r_k = \Xi_k \, g_C + w_k     (9.72)

where g_C = [g_{C,0}, \ldots, g_{C,N_c-1}, 0, \ldots, 0]^T is the M-component vector of the channel
impulse response extended with M - N_c zeros, w_k is a vector of additive white Gaussian
noise samples, and \Xi_k is an M \times M circulant matrix, given by

    \Xi_k = \begin{bmatrix}
        A_k[0]   & A_k[M-1] & \cdots & A_k[1] \\
        A_k[1]   & A_k[0]   & \cdots & A_k[2] \\
        \vdots   & \vdots   &        & \vdots \\
        A_k[M-1] & A_k[M-2] & \cdots & A_k[0]
    \end{bmatrix}     (9.73)
Equation (9.72) is obtained by observing that only the elements of the first N_c columns
of the matrix \Xi_k contribute to the convolution that determines the vector r_k, as the last
M - N_c elements of g_C are equal to zero. The elements of the last M - N_c columns of
the matrix \Xi_k are chosen so that the matrix is circulant, even though they might have been
chosen arbitrarily. Moreover, we observe that the matrix \Xi_k, being circulant, satisfies the
relation

    F_M \, \Xi_k \, F_M^{-1} = \begin{bmatrix}
        a_k[0] & 0      & \cdots & 0 \\
        0      & a_k[1] & \cdots & 0 \\
        \vdots & \vdots &        & \vdots \\
        0      & 0      & \cdots & a_k[M-1]
    \end{bmatrix} = \mathrm{diag}\{a_k\}     (9.74)
Defining the DFT of the vector g_C as

    G_C = [G_{C,0}, G_{C,1}, \ldots, G_{C,M-1}]^T = F_M \, g_C     (9.75)

and using (9.74), we find that the demodulator output is given by

    x_k = F_M \, r_k = \mathrm{diag}\{a_k\} \, G_C + W_k     (9.76)

where W_k = F_M w_k is given by the DFT of the vector w_k. Recalling the properties of w_k,
W_k is a vector of independent Gaussian r.v.s.
Equalizing the channel using the zero-forcing criterion, the signal x_k (9.76) is multiplied
by the diagonal matrix K, whose elements on the diagonal are given by²

    K_i = [K]_{i,i} = \frac{1}{G_{C,i}} \qquad i = 0, 1, \ldots, M-1     (9.77)

² To be precise, the operation indicated by (9.77), rather than equalizing the signal, which is received in the absence
of ISI, normalizes the amplitude and adjusts the phase of the desired signal.
We assume that the sequence of input symbol vectors fak g is a sequence of i.i.d. ran-
dom vectors. Equation (9.78) shows that the sequence fak g can be detected by assuming
transmission over M independent and orthogonal subchannels in the presence of additive
white Gaussian noise.
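The whole chain — cyclic prefix insertion, circular-convolution property, diagonalization by the DFT, and the one-tap zero-forcing equalizer of (9.77) — can be checked end-to-end with a short Python sketch (illustrative, noiseless for clarity; the variable names mirror the text).

```python
import numpy as np

rng = np.random.default_rng(2)
M, Nc = 16, 4                                # subchannels, channel length
g_C = rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc)   # channel taps
a = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=M)   # QPSK symbols

A = np.fft.ifft(a)                           # IDFT block (factor M omitted)
tx = np.concatenate([A[-(Nc - 1):], A])      # prepend cyclic prefix of Nc-1 samples

r = np.convolve(tx, g_C)                     # linear convolution with the channel
r_k = r[Nc - 1:Nc - 1 + M]                   # keep the last M samples of the block

G_C = np.fft.fft(g_C, M)                     # DFT of zero-padded impulse response
x_k = np.fft.fft(r_k)                        # demodulator output: diag{a_k} G_C
a_hat = x_k / G_C                            # one-tap ZF equalizer K_i = 1/G_C[i]

assert np.allclose(a_hat, a)                 # M independent scalar subchannels
```

Thanks to the prefix, the last M received samples are the circular convolution of A with g_C, so the DFT turns the dispersive channel into M independent complex gains, exactly as in (9.76).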
A drawback of this simple equalization scheme is the reduction in the modulation rate
by a factor (M + N_c - 1)/M. Therefore it is essential that the length of the channel
impulse response is much smaller than the number of subchannels, so that the reduction of
the modulation rate due to the cyclic extension can be considered negligible.
To reduce the length of the channel impulse response one approach is to equalize the
channel before demodulation [8, 10, 11]. With reference to Figure 9.18b, a linear equalizer
with input rn is used; it is usually chosen as the FF filter of a DFE that is determined
by imposing a prefixed length of the feedback filter, smaller than the length of the cyclic
prefix.
The overall impulse response of the i-th subchannel, given by the cascade of the transmit filter, the channel, and the receive filter, is then

    h_{\mathrm{overall},n}[i] = \sum_{n_1=-\infty}^{+\infty} \sum_{n_2=0}^{N_c-1} g_{nM-n_1}[i] \, h_{n_1-n_2}[i] \, g_{C,n_2} \qquad n \in \mathbb{Z}     (9.79)
In the given scheme, the M DFEs depend on the transmission channel. If the transmission
channel is time variant, each DFE must be able to track the channel variations, or it must
be recomputed periodically. Error propagation inherent to decision-feedback equalization
can be avoided by resorting to precoding techniques, as discussed in Chapter 13. The
application of precoding techniques in conjunction with trellis-coded modulation (TCM)
for FMT transmission is addressed in [4].
Figure 9.20. Per-subchannel equalization for an FMT system with non-critically sampled filter
banks.
If the number of subchannels is sufficiently high, and the group delay in the passband of the transmission
channel is approximately constant, the frequency response of every subchannel becomes
approximately a constant.
multiplying every subchannel signal by a complex value. Therefore, as for DMT systems
with cyclic prefix, equalization of the transmission channel can be performed by choosing a
suitable constant for every subchannel. We note, however, that, whereas for a DMT system
with cyclic prefix the model of the transmission channel as a multiplicative constant for each
subchannel is exact if the length of the cyclic prefix is larger than the length of the channel
impulse response, for an FMT system such a model is valid only as an approximation. The
degree of the approximation depends on the dispersion of the transmission channel and on
the number M of subchannels.
Assuming a constant frequency response for transmission over each subchannel, the
equalization scheme is given in Figure 9.21a, where K i is defined in (9.77), and the DFE
is designed to equalize only the cascade of the transmit and receive filters. Using (9.27)
and (9.31) we find that the convolution of transmit and receive filters is independent of the
subchannel index: in fact, we obtain
    \sum_{n_1=0}^{+\infty} h_{n_1}[i] \, g_{nM-n_1}[i] = \sum_{n_1=0}^{+\infty} h_{n_1} \, g_{nM-n_1} = h_{\mathrm{eq},n}     (9.80)
Figure 9.21. (a) Equalization scheme for FMT in the case of approximately constant frequency
response for transmission over each subchannel; (b) simplified scheme.
In general, a DFE scheme can be used, where the \ell-th polyphase components of the receive
filters, g_{FF}^{(\ell)} and g_{FB}^{(\ell)}, equalize the corresponding polyphase component h^{(\ell)} of the
equivalent impulse response \{h_{\mathrm{eq},n}\}.
Clock synchronization guarantees alignment of the timing phase at the receiver with that
at the transmitter; frame synchronization, on the other hand, extracts from the sequence of
received samples the blocks of M + N_c - 1 samples that form the received frames, and
determines the boundaries of the sequence of vectors r_k that are presented at the input of
the DFT. In principle, for a channel input sequence given by s_0 = 1, and s_n = 0, n \ne 0, the
channel impulse response of length N_c must appear in the first N_c positions of the receive
vector (see Figure 9.19).
For the initial convergence of both synchronization processes, training sequences without
cyclic prefix are usually employed [12]. For FMT systems with non-critically sampled filter
banks and fractionally-spaced equalization, the synchronization is limited to clock recovery
(see Section 8.4).
Figure 9.22. Block diagram of an SSB modulator for a passband DWMT signal.
If the prototype filter exhibits a roll-off around DC, we obtain a vestigial side band (VSB) modulator.3 However, because of
the difficult recovery of the phase and frequency of the carrier, digital transmission systems
using SSB and VSB modulators are characterized by lower performance as compared to
systems that consider transmission of the double-sided signal spectrum. To overcome this
difficulty and preserve the spectral efficiency of the transmission scheme, a pilot tone may be
used to provide the required information for the carrier recovery. The transmission of pilot
tones, however, does not represent in many cases a practical solution, as it reduces the power
efficiency of the system and introduces one or more spectral lines in the signal spectrum.
Multiple access DMT and FMT systems. Other difficulties arise, however, in the case of
transmission in multiple-access networks. Then two or more users transmit signals simul-
taneously over subsets of the available subchannels. We recall that in DMT systems the
channel impulse response needs to be shortened to reduce the length of the cyclic exten-
sion. Consequently in a multiple-access system the impulse response of each user’s channel
must be shortened. We observe that, even if a cyclic extension of sufficient length is used,
the orthogonality conditions are satisfied only if the subchannel signals are synchronous.
Because of the spectral overlapping between signals on adjacent subchannels in a DMT sys-
tem, a signal that is presented at the receiver input with an incorrect timing phase violates
the orthogonality conditions, and disturbs many other subchannels: this situation cannot
be avoided, for example, when a station sends a signal over a given subchannel without
knowledge of the propagation delay.
To solve the problems raised by the transmission of DMT signals in a multiple-access
network, we resort to FMT systems, which present large attenuation of the signal spec-
trum outside the allocated subchannels. In this manner, ICI is avoided even if the various
subchannel signals are received from stations without knowledge of the propagation delay.
3 SSB and VSB modulations are used, for example, for the analog transmission of video signals and can also
be considered for digital communication systems.
In practice, however, OFDM systems offer some considerable advantages with respect to
CAP/QAM systems.
• OFDM systems achieve higher spectral efficiency if the channel frequency response
exhibits large attenuations at frequencies within the passband. In fact, the band used
for transmission can be varied by increments equal to the modulation rate 1/T Hz,
and optimized for each channel. Moreover, if the noise exhibits strong components
in certain regions of the spectrum, the total band can be subdivided in two or more
sub-bands.
• OFDM systems guarantee a higher robustness with respect to impulse noise. If the
average arrival rate of the pulses is lower than the modulation rate, the margin against
the impulse noise is of the order of 10 \log_{10}(M) dB.
• For typical values of M, OFDM systems achieve the same performance as QAM
systems with a complexity that can be considerably lower.
• In multiple-access systems, the finer granularity of OFDM systems allows a greater
flexibility in the spectrum allocation.
On the other hand, OFDM systems present also a few drawbacks with respect to QAM
systems.
• In OFDM systems the transmitted signals exhibit a higher peak-to-average power
ratio, which contributes to an increase in the susceptibility of these systems to non-
linear distortion.
• Because of the block processing of samples, a higher latency is introduced by OFDM
systems in the transmission of information.
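The peak-to-average power ratio (PAPR) drawback is easy to observe numerically; the following Python sketch (illustrative, not from the text) estimates the per-block PAPR of an OFDM signal built from QPSK symbols.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 256                                   # number of subchannels
trials = 2000
papr_db = np.empty(trials)
for t in range(trials):
    a = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=M)
    s = np.fft.ifft(a)                    # one OFDM block
    p = np.abs(s) ** 2                    # instantaneous power
    papr_db[t] = 10 * np.log10(p.max() / p.mean())

# A single constant-envelope QPSK carrier has 0 dB PAPR; the sum of M
# carriers typically peaks several dB above its average power.
print(round(np.median(papr_db), 1))
```

Typical medians are around 8 dB for M = 256, which is why OFDM transmitters need amplifiers with a generous linear range.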
Figure 9.23. Amplitude of the frequency responses of the filters of an OFDM system with
2M subchannels.
Figure 9.24. Amplitude of the frequency responses of the filters shifted in frequency.
We set

    U_i(z) = \beta_i \, P_0\!\left(z \, W_{2M}^{i+\frac{1}{2}}\right) = \beta_i \, Q_i(z) \qquad 0 \le i \le M-1     (9.84)

    V_i(z) = \beta_i^* \, P_0\!\left(z \, W_{2M}^{-\left(i+\frac{1}{2}\right)}\right) = \beta_i^* \, Q_{2M-1-i}(z) \qquad 0 \le i \le M-1     (9.85)
           = \beta_i^* \, Q_i^*(z^*)     (9.86)

and we define the transfer functions of a new filter bank with M transmit filters, real input
symbol sequences, and modulation rate equal to 2/T, as

    H_i(z) = \alpha_i \, U_i(z) + \alpha_i^* \, V_i(z) \qquad 0 \le i \le M-1     (9.87)

In the previous equations \alpha_i and \beta_i are constants with absolute value equal to one. The
amplitude of the frequency response of the filter H_i(z) is illustrated in Figure 9.25. We
note that H_i(z) has a frequency response with positive frequency content due to U_i(z), and
negative frequency content due to V_i(z).
We assume that the original prototype filter P_0(z) is an FIR filter with length \gamma M and
transfer function given by

    P_0(z) = \sum_{n=0}^{\gamma M-1} p_n[0] \, z^{-n}     (9.88)

The M filters defined by (9.87) are also FIR filters of length \gamma M and transfer functions
defined as

    H_i(z) = \sum_{n=0}^{\gamma M-1} h_n[i] \, z^{-n} \qquad 0 \le i \le M-1     (9.89)

Because the coefficients of P_0(z) are real-valued, the coefficients of U_i(z) are obtained as
the complex conjugate of the coefficients of V_i(z). Consequently in (9.87) the coefficients
h_n[i], i = 0, \ldots, M-1, are real-valued.
We assume, moreover, that the prototype filter P_0(z) is a linear-phase filter and that the
relation p_{\gamma M-1-n}[0] = p_n[0] holds. Therefore we get

    P_0^*\!\left(\frac{1}{z^*}\right) = z^{\gamma M-1} \, P_0(z)     (9.90)
The frequency response of the filter can be expressed as

    P_0\!\left(e^{j 2\pi f \frac{T}{2M}}\right) = e^{-j 2\pi f \frac{\gamma M-1}{2} \frac{T}{2M}} \, P_R\!\left(e^{j 2\pi f \frac{T}{2M}}\right)     (9.91)

where P_R\!\left(e^{j 2\pi f \frac{T}{2M}}\right) is a real-valued function.
We choose the values of the constants \beta_i so that U_i(z) and V_i(z) have the same linear
phase as P_0(z); observing that

    U_i\!\left(e^{j 2\pi f \frac{T}{2M}}\right) = \beta_i \, W_{2M}^{-\left(i+\frac{1}{2}\right)\frac{\gamma M-1}{2}} \, e^{-j 2\pi f \frac{\gamma M-1}{2} \frac{T}{2M}} \, P_R\!\left(e^{j 2\pi \left(f \frac{T}{2M} - \frac{i+1/2}{2M}\right)}\right)     (9.92)

we let

    \beta_i = W_{2M}^{\left(i+\frac{1}{2}\right)\frac{\gamma M-1}{2}}     (9.93)

Therefore we get

    U_i\!\left(e^{j 2\pi f \frac{T}{2M}}\right) = e^{-j 2\pi f \frac{\gamma M-1}{2} \frac{T}{2M}} \, P_R\!\left(e^{j 2\pi \left(f \frac{T}{2M} - \frac{i+1/2}{2M}\right)}\right)

    V_i\!\left(e^{j 2\pi f \frac{T}{2M}}\right) = e^{-j 2\pi f \frac{\gamma M-1}{2} \frac{T}{2M}} \, P_R\!\left(e^{j 2\pi \left(f \frac{T}{2M} + \frac{i+1/2}{2M}\right)}\right)     (9.94)

and the functions U_i(z) and V_i(z) indeed exhibit the same linear phase as P_0(z). Because
U_i(z) and V_i(z) have a linear phase, analogously to (9.90), the following relations hold:

    U_i^*\!\left(\frac{1}{z^*}\right) = z^{\gamma M-1} \, U_i(z)

    V_i^*\!\left(\frac{1}{z^*}\right) = z^{\gamma M-1} \, V_i(z)     (9.95)
Moreover, we assume that the receive filters are matched, that is g_n[i] = h^*_{\gamma M-n}[i] =
h_{\gamma M-n}[i], 0 \le i \le M-1. Hence the transfer functions of the receive filters are given by

    G_j(z) = z^{-\gamma M} \, H_j^*\!\left(\frac{1}{z^*}\right) = z^{-\gamma M} \, H_j(z^{-1}) \qquad 0 \le j \le M-1     (9.96)
From (9.87) we get

    [G_j(z) H_{j+1}(z)]_{\downarrow M} = \frac{1}{M} \sum_{\ell=0}^{M-1} G_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) H_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right)

    = \frac{1}{M} \sum_{\ell=0}^{M-1} z^{-\frac{1}{M}} W_M^{-\ell} \left[\alpha_j^* \, U_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) + \alpha_j \, V_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right)\right]
    \times \left[\alpha_{j+1} \, U_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) + \alpha_{j+1}^* \, V_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right)\right]
Figure 9.26. (a) Amplitude of the filter frequency responses for M D 4; (b) spectral com-
ponents of ICI evaluated from the i-th input to the j-th output, for j D 1 and i D 2, after
downsampling (see (9.98)).
    \simeq \frac{1}{M} \sum_{\ell=0}^{M-1} z^{-\frac{1}{M}} W_M^{-\ell} \left[\alpha_j^* \alpha_{j+1} \, U_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) U_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right)
    + \alpha_j \alpha_{j+1}^* \, V_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) V_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right)\right]     (9.98)
where we have used the observation that in the case of ideal filters the functions U_{j_1} and
V_{j_2} do not overlap in frequency. Therefore we assume that their product is negligible. From
the definition (9.85) of the function V_j(z) and from (9.82) we get

    V_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) = \beta_j^* \, Q_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell-(j+1)}\right)

    V_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) = \beta_{j+1}^* \, Q_j\!\left(z^{\frac{1}{M}} W_M^{\ell-(j+1)}\right)     (9.99)
Substituting the previous equations in (9.98), observing (9.85) and (9.86), and using the
periodicity of W_M^{\ell}, we obtain

    [G_j(z) H_{j+1}(z)]_{\downarrow M} = \frac{1}{M} \sum_{\ell=0}^{M-1} z^{-\frac{1}{M}} W_M^{-\ell} \left[\alpha_j^* \alpha_{j+1} \beta_j \beta_{j+1}
    + \alpha_j \alpha_{j+1}^* \beta_j^* \beta_{j+1}^* \, W_M^{-(j+1)}\right] Q_j\!\left(z^{\frac{1}{M}} W_M^{\ell}\right) Q_{j+1}\!\left(z^{\frac{1}{M}} W_M^{\ell}\right)     (9.100)

The condition to suppress the ICI in the j-th subchannel due to the signal transmitted
on the (j+1)-th subchannel is therefore given by

    \alpha_j^* \alpha_{j+1} \beta_j \beta_{j+1} + \alpha_j \alpha_{j+1}^* \beta_j^* \beta_{j+1}^* \, W_M^{-(j+1)} = 0     (9.101)
Analogously, for the suppression of the ICI in the j-th subchannel due to the signal
transmitted on the (j-1)-th channel, the condition \alpha_j^* \alpha_{j-1} = -\alpha_j \alpha_{j-1}^* is found. We note
that, setting \alpha_j = e^{j\varphi_j}, we find that the condition for the approximate suppression of the
ICI can be expressed as

    \varphi_j = \varphi_{j-1} \pm \frac{\pi}{2}     (9.103)

Equation (9.103) sets a constraint on the sequence of the phases of the constants \alpha_j,
0 \le j \le M-1; to define the whole sequence it is necessary to determine the phase of \alpha_0.
From (9.87) and (9.97), we observe that
the products U_j(z) V_j(z) are negligible except for j = 0 and j = M-1. In these
cases, to avoid that the function \left|H_0\!\left(e^{j 2\pi f \frac{T}{2M}}\right)\right| is distorted at frequencies near zero and
that the function \left|H_{M-1}\!\left(e^{j 2\pi f \frac{T}{2M}}\right)\right| is distorted at frequencies near M/T, it must be

    \alpha_j^2 + \alpha_j^{*2} = 0 \qquad j = 0, M-1     (9.105)

Therefore we choose

    \alpha_0^4 = \alpha_{M-1}^4 = -1     (9.106)
Summarizing, for the design of the system we may start from a prototype filter P_0(z)
that approximates a square root raised cosine filter with Nyquist frequency 1/(2T). This
leads to verifying the condition (9.108). The M subchannel filters are obtained using
(9.84), (9.85), and (9.87), where \beta_i is defined in (9.93) and the phase of \alpha_i is given
in (9.107).
An efficient implementation of the transmit filter bank is illustrated in Figure 9.27, where
P^{(i)}(z) are the 2M polyphase components of the prototype filter P_0(z), and d_i = \alpha_i \beta_i,
i = 0, 1, \ldots, M-1. An efficient implementation of the receive filter bank is illustrated in
Figure 9.28, where

    d_i = \alpha_i \beta_i = e^{-j(-1)^i \frac{\pi}{4}} \, W_{2M}^{\left(i+\frac{1}{2}\right)\frac{\gamma M-1}{2}} \qquad i = 0, \ldots, M-1     (9.110)
Figure 9.27. OFDM system with approximate suppression of the ICI: transmit filter bank.
while J_M denotes the M \times M matrix that has elements equal to one on the antidiagonal
and all other elements equal to zero:

    J_M = \begin{bmatrix}
        0      & \cdots & 0      & 0      & 1 \\
        0      & \cdots & 0      & 1      & 0 \\
        \vdots &        & \vdots & \vdots & \vdots \\
        1      & 0      & 0      & \cdots & 0
    \end{bmatrix}     (9.111)
We assume that the parameters \gamma and M that determine the length \gamma M of the prototype
filter are even numbers. The element t_{in} of matrix T is then given by

    t_{in} = W_{2M}^{\frac{n}{2}} W_{2M}^{ni} \, d_i + W_{2M}^{\frac{n}{2}} W_{2M}^{n(2M-1-i)} \, d_i^*
           = 2 \cos\!\left[\frac{\pi}{M}\left(i+\frac{1}{2}\right)\left(n+\frac{\gamma M-1}{2}\right) + (-1)^i \frac{\pi}{4}\right]     (9.112)
Figure 9.28. OFDM system with approximate suppression of the ICI: receive filter bank.
where

    \hat{c}_{in} = \cos\!\left[\frac{\pi}{M}\left(i+\frac{1}{2}\right)\left(n+\frac{1}{2}\right)\right]

    \hat{s}_{in} = \sin\!\left[\frac{\pi}{M}\left(i+\frac{1}{2}\right)\left(n+\frac{1}{2}\right)\right]     (9.113)

    \theta_i = \pi \left(i+\frac{1}{2}\right) \frac{\gamma}{2} + (-1)^i \frac{\pi}{4}
Therefore using the matrix TT we obtain an equivalent block diagram to that of
Figure 9.27, as illustrated in Figure 9.29.
We give the following definitions:
1. C is the matrix of the M-point discrete cosine transform (DCT), whose element in
the position i, n is given by

    [C]_{i,n} = \sqrt{\frac{2}{M}} \cos\!\left[\frac{\pi}{M}\left(i+\frac{1}{2}\right)\left(n+\frac{1}{2}\right)\right]
    \qquad i = 0, \ldots, M-1, \quad n = 0, \ldots, M-1     (9.114)
2. S is the matrix of the M-point discrete sine transform (DST), whose element in the
position i, n is given by

    [S]_{i,n} = \sqrt{\frac{2}{M}} \sin\!\left[\frac{\pi}{M}\left(i+\frac{1}{2}\right)\left(n+\frac{1}{2}\right)\right]     (9.115)

3. \Sigma_c and \Sigma_s are diagonal matrices with elements

    [\Sigma_c]_{ii} = \cos\!\left[\frac{\pi}{2}\left(i+\frac{1}{2}\right)\right]     (9.116)

and

    [\Sigma_s]_{ii} = \sin\!\left[\frac{\pi}{2}\left(i+\frac{1}{2}\right)\right]     (9.117)

respectively;
4.

    \Lambda_M = \begin{bmatrix}
        1      & 0      & 0      & \cdots & 0 \\
        0      & -1     & 0      & \cdots & 0 \\
        \vdots & \vdots & \vdots &        & \vdots \\
        0      & 0      & 0      & \cdots & (-1)^{M-1}
    \end{bmatrix}     (9.118)

5.

    T = [A_0, A_1]     (9.119)
We recall that each polyphase component P^{(\ell)}(z) has length \gamma/2. Moreover, from (9.90),
the relation p_n[0] = p_{\gamma M-1-n}[0] implies the following constraints on the polyphase
components:

    P^{(\ell)}(z) = z^{-\left(\frac{\gamma}{2}-1\right)} \left[P^{(2M-1-\ell)}\!\left(\frac{1}{z^*}\right)\right]^* \qquad \ell = 0, \ldots, 2M-1     (9.126)

From property (9.126), we find that the diagonal matrices p_0(z) and p_1(z) satisfy the relation

    p_1(z) = z^{-\left(\frac{\gamma}{2}-1\right)} \, J_M \, p_0^*\!\left(\frac{1}{z^*}\right) J_M     (9.127)

Then, using (9.127), we find that the second term in (9.125) vanishes. Therefore we
obtain

    E(z) \, E^H\!\left(\frac{1}{z^*}\right) = 2M \left[p_0(z^2) \, p_0^*\!\left(\frac{1}{z^{*2}}\right) + p_1(z^2) \, p_1^*\!\left(\frac{1}{z^{*2}}\right)\right]     (9.128)

Recalling that for two matrices whose product is the identity matrix the commutative
property holds, we get E^H(1/z^*) \, E(z) = E(z) \, E^H(1/z^*) = I if and only if

    p_0(z) \, p_0^*\!\left(\frac{1}{z^*}\right) + p_1(z) \, p_1^*\!\left(\frac{1}{z^*}\right) = \frac{1}{2M} \, I     (9.129)
Using (9.129), we find the conditions on the polyphase components of the prototype
filter for perfect suppression of ISI and ICI, given by

    P^{(\ell)}(z) \left[P^{(\ell)}\!\left(\frac{1}{z^*}\right)\right]^* + P^{(M+\ell)}(z) \left[P^{(M+\ell)}\!\left(\frac{1}{z^*}\right)\right]^* = \frac{1}{2M} \qquad 0 \le \ell \le M-1     (9.130)
The conditions (9.130) can be used for the design of filters for DWMT systems. An
efficient filter bank implementation is obtained by the DCT [1].
Bibliography
[1] P. P. Vaidyanathan, Multirate systems and filter banks. Englewood Cliffs, NJ: Prentice-
Hall, 1993.
[2] M. G. Bellanger, G. Bonnerot, and M. Coudreuse, “Digital filtering by polyphase net-
work: application to sample-rate alteration and filter banks”, IEEE Trans. on Acoustics,
Speech and Signal Processing, vol. ASSP-24, pp. 109–114, Apr. 1976.
[3] G. Cherubini, E. Eleftheriou, S. Ölçer, and J. M. Cioffi, “Filter bank modulation tech-
niques for very high-speed digital subscriber lines”, IEEE Communications Magazine,
vol. 38, pp. 98–104, May 2000.
[4] G. Cherubini, E. Eleftheriou, and S. Ölçer, “Filtered multitone modulation for very-
high-speed digital subscriber lines”, IEEE Journal on Selected Areas in Communica-
tions, June 2002.
[5] S. B. Weinstein and P. M. Ebert, “Data transmission by frequency-division multiplex-
ing using the discrete Fourier transform”, IEEE Trans. on Communications, vol. 19,
pp. 628–634, Oct. 1971.
[6] J. A. C. Bingham, “Multicarrier modulation for data transmission: an idea whose time
has come”, IEEE Communications Magazine, vol. 28, pp. 5–14, May 1990.
[7] H. Sari, G. Karam, and I. Jeanclaude, “Transmission techniques for digital terrestrial
TV broadcasting”, IEEE Communications Magazine, vol. 33, pp. 100–109, Feb. 1995.
[8] J. S. Chow, J. C. Tu, and J. M. Cioffi, “A discrete multitone transceiver system
for HDSL applications”, IEEE Journal on Selected Areas in Communications, vol. 9,
pp. 895–908, Aug. 1991.
[9] S. D. Sandberg and M. A. Tzannes, “Overlapped discrete multitone modulation for
high speed copper wire communications”, IEEE Journal on Selected Areas in Com-
munications, vol. 13, pp. 1571–1585, Dec. 1995.
[10] P. J. W. Melsa, R. C. Younce, and C. E. Rohrs, “Impulse response shortening for
discrete multitone transceivers”, IEEE Trans. on Communications, vol. 44, pp. 1662–
1672, Dec. 1996.
[11] R. Baldemair and P. Frenger, “A time-domain equalizer minimizing intersymbol and
intercarrier interference in DMT systems”, in Proc. GLOBECOM ’01, San Antonio,
TX, Nov. 2001.
[12] T. Pollet and M. Peeters, “Synchronization with DMT modulation”, IEEE Communi-
cations Magazine, vol. 37, pp. 80–86, Apr. 1999.
[13] J. M. Cioffi, G. P. Dudevoir, M. V. Eyuboglu, and G. D. Forney, Jr., “MMSE decision-
feedback equalizers and coding. Part I and Part II”, IEEE Trans. on Communications,
vol. 43, pp. 2582–2604, Oct. 1995.
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 10

Spread spectrum systems
The term spread spectrum systems [1, 2, 3, 4, 5, 6, 7] was coined to indicate communication
systems in which the bandwidth of the signal obtained by a standard modulation method
(see Chapter 6) is spread by a certain factor before transmission over the channel, and then
despread, by the same factor, at the receiver. The operations of spreading and despreading
are the inverse of each other, i.e. for an ideal and noiseless channel the received signal
after despreading is equivalent to the transmitted signal before spreading. For transmission
over an ideal AWGN channel these operations therefore offer no improvement in
performance with respect to a system that does not use spread spectrum. However, the
practical applications of spread spectrum systems are numerous, for example in multiple-access
systems, narrowband interference rejection, and transmission over channels affected by
fading (see Section 10.2).
1) Bit-mapper. From the sequence of information bits, a sequence of i.i.d. symbols $\{a_k^{(u)}\}$
with statistical power $\mathsf{M}_a$ is produced. The symbols assume values in an $M$-ary constellation,
using one of the maps described in Chapter 6; typically, in mobile radio systems, BPSK or
QPSK modulation is used. Let $T$ be the symbol period.
2) Spreading. We indicate by the integer $N_{SF}$ the spreading factor, and by $T_{chip}$ the chip
period. These two parameters are related to the symbol period $T$ by the relation
$$T_{chip} = \frac{T}{N_{SF}} \qquad (10.1)$$
Figure 10.1. Baseband equivalent model of a DS system: (a) transmitter, (b) multiuser channel.
We recall from Appendix 6.D the definition of the Walsh–Hadamard sequences of length
$N_{SF}$. Here we refer to these sequences as channelization codes $\{c^{(u)}_{Ch,m}\}$, $m = 0, 1, \ldots, N_{SF}-1$,
$u \in \{1, \ldots, U\}$. Moreover, $c^{(u)}_{Ch,m} \in \{-1, 1\}$, and $|c^{(u)}_{Ch,m}| = 1$. The Walsh–Hadamard
sequences are orthogonal, that is
$$\frac{1}{N_{SF}} \sum_{m=0}^{N_{SF}-1} c^{(u_1)}_{Ch,m}\, c^{(u_2)*}_{Ch,m} = \begin{cases} 1 & \text{if } u_1 = u_2 \\ 0 & \text{if } u_1 \neq u_2 \end{cases} \qquad (10.2)$$
We assume that the sequences of the channelization code are periodic, that is $c^{(u)}_{Ch,m} = c^{(u)}_{Ch,\,m \bmod N_{SF}}$.
We now introduce the user code $\{c_m^{(u)}\}$, also called signature sequence or spreading
sequence, that we initially assume to be equal to the channelization code,
$$c_m^{(u)} = c^{(u)}_{Ch,m} \qquad (10.3)$$
Consequently, $\{c_m^{(u)}\}$ is also a periodic sequence of period $N_{SF}$.
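As a quick numerical check of (10.2), Walsh–Hadamard sequences of length $N_{SF} = 2^n$ can be generated by the Sylvester recursion and their normalized correlations verified; this is a sketch, with function names of our choosing rather than from the text:

```python
import numpy as np

def walsh_hadamard(n_sf):
    """Sylvester construction: H_{2N} = [[H_N, H_N], [H_N, -H_N]].
    Rows are the N_SF channelization codes c_Ch^(u), with chips in {-1, +1}."""
    assert n_sf & (n_sf - 1) == 0, "N_SF must be a power of two"
    H = np.array([[1]])
    while H.shape[0] < n_sf:
        H = np.block([[H, H], [H, -H]])
    return H

N_SF = 8
H = walsh_hadamard(N_SF)
# (10.2): (1/N_SF) * sum_m c^(u1)_m c^(u2)*_m = delta_{u1,u2}
R = (H @ H.T.conj()) / N_SF
print(np.allclose(R, np.eye(N_SF)))  # True: the codes are orthogonal
```

The same matrix `H` reappears, row by row, as the set of channelization codes assigned to the $U \le N_{SF}$ users of a cell.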
[Figure 10.2: (a) spreading of $\{a_k^{(u)}\}$ by upsampling by $N_{SF}$ and holding, followed by multiplication by the user code $\{c_m^{(u)}\}$ to produce $\{d_m^{(u)}\}$; (b) equivalent scheme with interpolator filter $g_{sp}^{(u)}$.]

The operation of spreading consists of associating with each symbol $a_k^{(u)}$ a sequence of
$N_{SF}$ symbols of period $T_{chip}$, that is obtained as follows. First, each symbol $a_k^{(u)}$ is repeated
$N_{SF}$ times with period $T_{chip}$: as illustrated in Figure 10.2, this operation is equivalent to
upsampling $\{a_k^{(u)}\}$, so that $(N_{SF}-1)$ zeros are inserted between two consecutive symbols,
and using a holder of $N_{SF}$ values. The obtained sequence is then multiplied by the user
code. Formally we have
$$\bar a_m^{(u)} = a_k^{(u)} \qquad m = kN_{SF}, \ldots, kN_{SF} + N_{SF} - 1$$
$$d_m^{(u)} = \bar a_m^{(u)}\, c_m^{(u)} \qquad (10.4)$$
the operation of Figure 10.2a can be substituted by an interpolation with the interpolator
filter $g_{sp}^{(u)}$, as illustrated in Figure 10.2b.
Recalling that $|c_m^{(u)}| = 1$, from (10.4) we get
$$\mathsf{M}_d = \mathsf{M}_a \qquad (10.6)$$
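The spreading rule (10.4), repetition by a holder of $N_{SF}$ values followed by chip-by-chip multiplication by the periodic user code, amounts to a couple of lines of array code; a sketch with names of our choosing:

```python
import numpy as np

def spread(a, c):
    """Spread symbols a_k with a user code c of length N_SF, as in (10.4):
    each symbol is repeated N_SF times (upsampling + holder), then the held
    sequence is multiplied chip-by-chip by the periodic user code."""
    n_sf = len(c)
    a_bar = np.repeat(a, n_sf)        # bar a_m = a_k for m = k*N_SF .. k*N_SF+N_SF-1
    code = np.tile(c, len(a))         # periodic extension of c_m
    return a_bar * code               # d_m = bar a_m * c_m

a = np.array([1, -1, 1])              # BPSK symbols a_k
c = np.array([1, -1, 1, -1])          # user code, N_SF = 4
d = spread(a, c)
print(d)  # [ 1 -1  1 -1 -1  1 -1  1  1 -1  1 -1]
```

Since every chip of the code has unit modulus, the chip sequence `d` has the same statistical power as `a`, in agreement with (10.6).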
3) Pulse-shaping. Let $h_{Tx}$ be the modulation pulse, typically a square root raised cosine
function or rectangular window. The baseband equivalent of the transmitted signal of user
$u$ is expressed as
$$s^{(u)}(t) = A^{(u)} \sum_{m=-\infty}^{+\infty} d_m^{(u)}\, h_{Tx}(t - m T_{chip}) \qquad (10.7)$$
where $A^{(u)}$ accounts for the transmit signal power. In fact, if $E_h$ is the energy of $h_{Tx}$ and
$\{d_m^{(u)}\}$ is assumed i.i.d., the average statistical power of $s^{(u)}(t)$ is given by (see (1.399))
$$\bar{\mathsf{M}}_{s^{(u)}} = (A^{(u)})^2\, \mathsf{M}_d\, \frac{E_h}{T_{chip}} \qquad (10.8)$$
and
$$s^{(u)}(t) = A^{(u)} \sum_{k=-\infty}^{+\infty} a_k^{(u)}\, h_T^{(u)}(t - kT) \qquad (10.11)$$
As shown in Figure 10.3, the equivalent scheme to the cascade of spreader and pulse-shaping
filter is still a QAM modulator. The peculiarity is that the filter $h_T^{(u)}$ has a bandwidth much
larger than the Nyquist frequency $1/(2T)$. Therefore a DS system can be interpreted as a
QAM system either with input symbols $\{d_m^{(u)}\}$ and transmit pulse $h_{Tx}$, or with input symbols
$\{a_k^{(u)}\}$ and pulse $h_T^{(u)}$; later both interpretations will be used.
[Figure 10.3: equivalent QAM modulator: symbols $a_k^{(u)}$ at rate $1/T$, filter $h_T^{(u)}$, gain $A^{(u)}$, output $s^{(u)}(t)$.]
5) Noise. In Figure 10.1b the term $w_C$ includes both the noise of the receiver and possible
additional interference, such as the interference due to signals coming from other cells in
a wireless system; $w_C$ is modeled as white noise with PSD equal to $N_0$.
Two signal-to-noise ratios are of interest. To measure the performance of the system in
terms of $P_{bit}$, it is convenient to refer to the signal-to-noise ratio defined in Chapter 6 for
passband transmission,
$$\Gamma_s = \frac{\mathsf{M}_{s_C^{(1)}}}{N_0/T} \qquad (10.15)$$
We recall that for an uncoded sequence of symbols $\{a_k^{(1)}\}$, the following relation holds:
$$\frac{E_b}{N_0} = \frac{\Gamma_s}{\log_2 M} \qquad (10.16)$$
However, there are cases, as for example in the evaluation of the performance of the channel
impulse response estimation algorithm, when it is useful to measure the power of the noise
over the whole transmission bandwidth. Hence we define the ratio
$$\Gamma_c = \frac{\mathsf{M}_{s_C^{(1)}}}{N_0/T_{chip}} = \frac{\Gamma_s}{N_{SF}} \qquad (10.17)$$
6) Receiver. The receiver structure varies according to the channel model and the number of
users. Deferring until Section 10.3 the analysis of more complicated system configurations,
here we limit ourselves to considering the case of an ideal AWGN channel with $g_C^{(u)}(t) = \delta(t)$
and synchronous users. The latter assumption implies that the transmitters of the various
users are synchronized and transmit at the same instant. For an ideal AWGN channel this
means that at the receiver the optimum timing phase of the signals of the different users is the
same.
With these assumptions, we verify that the optimum receiver is simply given by the
filter matched to $h_T^{(1)}(t)$. According to the analog or discrete-time implementation of the
matched filters, we get the schemes of Figure 10.4 or Figure 10.5, respectively; note that in
Figure 10.5 the receiver front-end comprises an anti-aliasing filter followed by a sampler
with sampling period $T_c = T_{chip}/2$. Let $t_0$ be the optimum timing phase at the matched
filter output (see Section 14.7).
For an ideal AWGN channel we have
$$r_C(t) = \sum_{u=1}^{U} s^{(u)}(t) + w_C(t) = \sum_{u=1}^{U} A^{(u)} \sum_{i=-\infty}^{+\infty} \sum_{\ell=0}^{N_{SF}-1} a_i^{(u)}\, c_\ell^{(u)}\, h_{Tx}(t - (\ell + iN_{SF})\, T_{chip}) + w_C(t) \qquad (10.18)$$
In the presence of only the desired user, that is for $U = 1$, it is clear that in the absence of
ISI the structure with the filter matched to $h_T^{(1)}(t)$ is optimum. We verify that the presence
of other users is cancelled at the receiver, given that the various user codes are orthogonal.
Figure 10.4. Optimum receiver with analog filters for a DS-CDMA system with ideal
AWGN channel and synchronous users. Two equivalent structures: (a) overall matched
filter $g_M(t) = h_T^{(1)*}(-t)$, (b) filter matched to $h_{Tx}$ and despreading correlator.
Figure 10.5. Optimum receiver with discrete-time filters for a DS-CDMA system with ideal
AWGN channel and synchronous users. Three equivalent structures: (a) overall matched filter,
(b) filter matched to $h_{Tx}$ and despreading filter, (c) filter matched to $h_{Tx}$ and despreading
correlator.
We assume that the overall analog impulse response of the system is a Nyquist pulse,
hence
$$(h_{Tx} * g_C^{(u)} * g_{AA} * g_M)(t)\big|_{t = t_0 + jT_{chip}} = (h_{Tx} * g_M)(t)\big|_{t = t_0 + jT_{chip}} = E_h\, \delta_j \qquad (10.19)$$
We note that, if $t_0$ is the instant at which the peak of the overall pulse at the output of $g_M$
is observed, then $t_0'$ in Figure 10.5 is given by $t_0' = t_0 - t_{g_M}$, where $t_{g_M}$ is the duration
of $g_M$. Moreover, from (10.19), we get that the noise at the output of $g_M$, sampled with
sampling rate $1/T_{chip}$,
$$\tilde w_m = (w_C * g_M)(t)\big|_{t = t_0 + mT_{chip}} \qquad (10.20)$$
is an i.i.d. sequence with variance $N_0 E_h$.
Hence, from (10.9) and (10.19), the signal at the output of $g_M$, sampled with sampling
rate $1/T_{chip}$, has the following expression:
$$x_m = E_h \sum_{u=1}^{U} A^{(u)} \sum_{i=-\infty}^{+\infty} \sum_{\ell=0}^{N_{SF}-1} a_i^{(u)}\, c^{(u)}_{\ell+iN_{SF}}\, \delta_{m-\ell-iN_{SF}} + \tilde w_m \qquad (10.21)$$
As usual, introducing the filter $g_{ds}^{(u)}$ given by
$$g_{ds}^{(u)}(i\,T_{chip}) = c^{(1)*}_{N_{SF}-1-i} \qquad i = 0, 1, \ldots, N_{SF}-1 \qquad (10.24)$$
the correlation (10.23) is implemented through the filter (10.24), followed by a downsampler,
as illustrated in Figure 10.5b.
Substitution of (10.22) in (10.23) yields
$$y_k = N_{SF} E_h \sum_{u=1}^{U} A^{(u)}\, a_k^{(u)}\, r_{c^{(u)} c^{(1)}}(0) + w_k \qquad (10.25)$$
where in general
$$r_{c^{(u_1)} c^{(u_2)}}(n_D) = \frac{1}{N_{SF} - |n_D|} \sum_{j=0}^{N_{SF}-1-|n_D|} c^{(u_1)}_{j+kN_{SF}+n_D}\, c^{(u_2)*}_{j+kN_{SF}} \qquad (10.26)$$
In the considered case, from (10.3) and (10.2) we get $r_{c^{(u_1)} c^{(u_2)}}(0) = \delta_{u_1 - u_2}$. Therefore
(10.25) simply becomes
$$y_k = N_{SF} E_h\, A^{(1)}\, a_k^{(1)} + w_k \qquad (10.27)$$
where $N_{SF} E_h$ is the energy of the pulse associated with $a_k^{(1)}$ (see (10.10)).
In (10.27) the noise is given by
$$w_k = \sum_{j=0}^{N_{SF}-1} \tilde w_{j+kN_{SF}}\, c^{(1)*}_{j+kN_{SF}} \qquad (10.28)$$
8) Data detector. Using a threshold detector, from (10.27) the signal-to-noise ratio at the
decision point is given by (see (7.106))
$$\left(\frac{d_{min}}{2\sigma_I}\right)^2 = \frac{(N_{SF} E_h A^{(1)})^2}{N_{SF} N_0 E_h / 2} = \frac{N_{SF} E_h (A^{(1)})^2}{N_0/2} \qquad (10.30)$$
9) Multi-user receiver. The derivation of the optimum receiver carried out for user 1
can be repeated for each user. Therefore we obtain the multiuser receiver of Figure 10.6,
composed of a filter matched to the transmit pulse and a despreader bank, where each
branch employs a distinct user code.
We observe that for the ideal AWGN channel case, spreading the bandwidth of the
transmit signal by a factor $U$ allows the simultaneous transmission of $U$ messages using
the same frequency band.
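Under the ideal synchronous AWGN assumptions above, the whole chain can be checked numerically: spread the messages of $U$ users with orthogonal Walsh–Hadamard codes, sum them on the channel, and despread with each user's own code; every message is recovered exactly. The following sketch is our own construction, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SF, U, K = 8, 4, 10                      # spreading factor, users, symbols per user

# Sylvester-Hadamard rows as channelization codes
H = np.array([[1]])
while H.shape[0] < N_SF:
    H = np.block([[H, H], [H, -H]])
codes = H[:U]                              # one row per user

a = rng.choice([-1, 1], size=(U, K))       # BPSK messages a_k^(u)
# spread each user and sum: synchronous multiuser channel, no noise
r = sum(np.repeat(a[u], N_SF) * np.tile(codes[u], K) for u in range(U))

# despread: correlate each symbol interval with the user's own code
a_hat = np.array([[r[k*N_SF:(k+1)*N_SF] @ codes[u] / N_SF for k in range(K)]
                  for u in range(U)])
print(np.array_equal(a_hat, a))  # True: orthogonality removes all MUI
```

Replacing the channel line with `r + sigma * rng.standard_normal(r.shape)` shows the other side of (10.25): the despreader leaves only the desired term plus the noise contribution (10.28).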
Figure 10.6. Multiuser receiver for a CDMA synchronous system with an ideal AWGN channel.
From the point of view of each mobile station, all $U$ users share the same channel. Therefore,
although the channel impulse response depends on the site of the mobile, we have
and the residual interference is due to signals originating from adjacent cells in addition
to the multipath interference introduced by the channel. In general, interference due to
the other users within the same cell is called multi-user interference (MUI) or co-channel
interference (CCI).
Asynchronous systems. In this case the various user signals are not time-aligned. In a
wireless system, this situation typically occurs in the reverse or uplink transmission from
the mobile stations to the base station.
Because the Walsh–Hadamard codes do not exhibit good cross-correlation properties for
lags different from zero, PN scrambling sequences are used (see Appendix 3.A). The user
code is then given by
$$c_m^{(u)} = c^{(u)}_{Ch,m}\, c_{scr,m} \qquad (10.33)$$
1 This observation must not be confused with the distinction between the use of short (of period $\simeq 2^{15}$) or long
(of period $\simeq 2^{42}$) PN scrambling sequences, which are employed to identify the base stations or the users and
to synchronize the system [8].
Asynchronous systems are characterized by codes with low cross-correlation for non-zero
lags; however, there is always a residual non-zero correlation among the various user
signals. Especially in the presence of multipath channels, the residual correlation is the
major cause of interference in the system, which now originates from signals within the
cell: for this reason the MUI is usually characterized as intracell MUI.
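The construction (10.33) multiplies every channelization code of a cell by one common scrambling sequence. Since $|c_{scr,m}| = 1$, the user codes of the same cell remain exactly orthogonal at lag zero, while their correlations at other lags are randomized. A small sketch (the scrambler is drawn at random here as a stand-in for a PN generator):

```python
import numpy as np

rng = np.random.default_rng(1)
N_SF = 16

H = np.array([[1]])
while H.shape[0] < N_SF:
    H = np.block([[H, H], [H, -H]])        # Walsh-Hadamard channelization codes

c_scr = rng.choice([-1, 1], size=N_SF)     # common scrambling sequence (stand-in for PN)
codes = H * c_scr                          # (10.33): c_m^(u) = c_Ch,m^(u) * c_scr,m

# lag-0 cross-correlations are unchanged, because c_scr,m^2 = 1 cancels out
R = codes @ codes.T / N_SF
print(np.allclose(R, np.eye(N_SF)))  # True
```

The benefit of the scrambler shows up only at non-zero lags, which is exactly the regime that matters for asynchronous users.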
Synchronization
The despreading operation requires that the receiver is capable of reproducing a user code
sequence synchronous with that used for spreading. Therefore the receiver must first perform
acquisition, that is, the code sequence $\{c_m^{(u)}\}$ produced by the local generator must
be synchronized with the code sequence of the desired user, so that the error in the time
alignment between the two sequences is less than one chip interval.
As described in Section 14.7, acquisition of the desired user code sequence is generally
obtained by a sequential search algorithm that, at each step, delays the local code generator
by a fraction of a chip, typically half a chip, and determines the correlation between the
signals $\{x_m\}$ and $\{c_m^{(u)}\}$; the search terminates when the correlation level exceeds a certain
threshold value, indicating that the desired time alignment is attained. Following the acquisition
process, a tracking algorithm is used to achieve, in the steady state, a time alignment
between the signals $\{x_m\}$ and $\{c_m^{(u)}\}$ that has the desired accuracy; the most commonly used
tracking algorithms are the delay-locked loop and the tau-dither loop. The synchronization
method also suggests the use of PN sequences as user code sequences. In practice, the chip
frequency is limited to values of the order of hundreds of Mchip/s because of the difficulty
in obtaining an accuracy of the order of a fraction of a nanosecond in the synchronization
of the code generator. In turn, this determines the limit on the bandwidth of a DS signal.
$$c_{FH}(t) = \sum_{i=-\infty}^{+\infty} e^{j(2\pi f_{0,i} t + \varphi_{0,i})}\, w_{T_{hop}}(t - i T_{hop}) \qquad (10.34)$$
where $\{f_{0,i}\}$ is a pseudorandom sequence that determines the shifts in frequency of the FH/M-FSK
signal, $\{\varphi_{0,i}\}$ is a sequence of random phases associated with the sequence of frequency
shifts, and $w_{T_{hop}}$ is a rectangular window of duration equal to a hop interval $T_{hop}$. In an
FH/M-FSK system, the transmitted signal is then given by
In practice, the signal $c_{FH}(t)$ is not generated at the transmitter; the transmitted signal
$s(t)$ is obtained by applying the sequence of pseudorandom frequency shifts $\{f_{0,i}\}$ directly
to the frequency synthesizer that generates the carrier at frequency $f_0$. With reference to
the implementation illustrated in Figure 10.7, segments of $L$ consecutive chips from a PN
sequence, not necessarily disjoint, are applied to a frequency synthesizer that makes the
carrier frequency hop over a set of $2^L$ frequencies. As the band over which the synthesizer
must operate is large, it is difficult to maintain the carrier phase coherent between two
consecutive hops [9]; if the synthesizer is not equipped with any device to maintain a
coherent phase, it is necessary to include a random phase $\varphi_{0,i}$ as in the expression (10.34).
In a time interval that is long with respect to $T_{hop}$, the bandwidth of the signal $s(t)$, $B_{SS}$, can
in practice be of the order of several GHz. However, in a short time interval during which
no frequency hopping occurs, the bandwidth of an FH/M-FSK signal is the same as the
bandwidth of the M-FSK signal that carries the information, usually much lower than $B_{SS}$.
Despreading, in this case also called dehopping, is ideally carried out by multiplying
the received signal $r(t)$ by a signal $\hat c_{FH}(t)$ equal to that used for spreading, apart from
the sequence of random phases associated with the frequency shifts. For non-coherent
demodulation, the sequence of random phases can be modelled as a sequence of i.i.d.
random variables with uniform probability density in $[0, 2\pi)$. The operation of despreading
yields the signal $x(t)$, given by the sum of the M-FSK signal, the noise, and possibly
interference. The signal $x(t)$ is then filtered by a lowpass filter and presented to the input
of the receive section comprising a non-coherent demodulator for M-FSK signals. As in the
case of DS systems, the receiver must perform acquisition and tracking of the FH signal,
so that the waveform generated by the synthesizer for dehopping reproduces as accurately
as possible the signal $c_{FH}(t)$.
Classification of FH systems
FH systems are traditionally classified according to the relation between $T_{hop}$ and $T$. Fast
frequency-hopped (FFH) systems are characterized by one or more frequency hops per
symbol interval, that is $T = N\,T_{hop}$, $N$ integer, and slow frequency-hopped (SFH) systems
are characterized by the transmission of several symbols per hop interval, that is
$T_{hop} = N\,T$.
Moreover, a chip frequency $F_{chip}$ is defined also for FH systems, and is given by the
larger of $F_{hop} = 1/T_{hop}$ and $F = 1/T$. Therefore the chip frequency $F_{chip}$
corresponds to the highest among the clock frequencies used by the system. The frequency
spacing between the tones of an FH/M-FSK signal is related to the chip frequency and is
therefore determined differently for FFH and SFH systems.
SFH systems. For SFH systems, $F_{chip} = F$, and the spacing between FH/M-FSK tones
is equal to the spacing between the M-FSK tones themselves. In a system that uses a
non-coherent receiver for M-FSK signals, orthogonality of tones corresponding to M-FSK
symbols is obtained if the frequency spacing is an integer multiple of $1/T$. Assuming the
minimum spacing equal to $F$, the bandwidth $B_{SS}$ of an FH/M-FSK signal is partitioned
into $N_f = B_{SS}/F = B_{SS}/F_{chip}$ sub-bands with equally spaced center frequencies; in the
most commonly used FH scheme the $N_f$ tones are grouped into $N_b = N_f/M$ adjacent
bands without overlap in frequency, each one having a bandwidth equal to $M F = M F_{chip}$,
as illustrated in Figure 10.8. Assuming M-FSK modulation symmetric around the carrier
frequency, the center frequencies of the $N_b = 2^L$ bands represent the set of carrier frequencies
generated by the synthesizer, each associated with an $L$-tuple of binary symbols.
According to this scheme, each of the $N_f$ tones of the FH/M-FSK signal corresponds to a
unique combination of carrier frequency and M-FSK symbol.
Figure 10.8. Frequency distribution for an FH/4-FSK system with bands non-overlapping in
frequency; the dashed lines indicate the carrier frequencies.
Figure 10.9. Frequency distribution for an FH/4-FSK system with bands overlapping in
frequency.
In a different scheme, which yields better protection against an intentional jammer using
a sophisticated disturbance strategy, adjacent bands exhibit an overlap in frequency equal
to $(M-1)\,F_{chip}$ Hz, as illustrated in Figure 10.9. Assuming that the center frequency of
each band corresponds to a possible carrier frequency, as all $N_f$ tones except $(M-1)$ are
available as center frequencies, the number of carrier frequencies increases from $N_f/M$ to
$N_f - (M-1)$, which for $N_f \gg M$ represents an increase by a factor $M$ of the randomness
in the choice of the carrier frequency.
FFH systems. For FFH systems, where $F_{chip} = F_{hop}$, the spacing between tones of an
FH/M-FSK signal is equal to the hop frequency. Therefore the bandwidth of the spread
spectrum signal is partitioned into a total of $N_f = B_{SS}/F_{hop} = B_{SS}/F_{chip}$ sub-bands with
equally spaced center frequencies, each corresponding to a unique $L$-tuple of binary symbols.
Because there are $F_{hop}/F$ hops per symbol, the metric used to decide upon the symbol
with a non-coherent receiver is suitably obtained by summing $F_{hop}/F$ components of the
received signal.
3. Robustness against fading. Widening the signal bandwidth allows exploitation of the
multipath diversity of a radio channel affected by fading. Intuitively, applying a DS spread
spectrum technique has the effect of modifying a channel model that is adequate for
transmission of narrowband signals, in the presence of flat fading or multipath fading
with a few rays, into a channel model with many rays. Using a receiver that combines
the desired signal from the different propagation rays, the power of the desired signal
at the decision point increases. In an FH system, on the other hand, we obtain diversity
in the time domain, as the channel changes from one hop interval to the next. The
probability that the signal is affected by strong fading during two consecutive hop
intervals is usually low. To recover the transmitted message in a hop interval during
which strong fading is experienced, error correction codes with very long interleavers
and ARQ schemes are used (see Chapter 11).
Figure 10.10. Power spectral density of an M-QAM signal with minimum bandwidth and of a
spread spectrum M-QAM signal with spreading factor $N_{SF} = 4$.
In practice, performance is usually limited by interference and the presence of white noise
can be ignored. Therefore, assuming $N_j \gg N_0$, (10.36) becomes
$$\mathsf{M}_a \simeq \frac{1}{2}\,\frac{E_s}{N_j} = \frac{\mathsf{M}_s/F}{\mathsf{M}_j/B_{SS}} = \frac{\mathsf{M}_s}{\mathsf{M}_j}\,\frac{B_{SS}}{F} \qquad (10.37)$$
where $\mathsf{M}_s/\mathsf{M}_j$ is the ratio between the power of the desired signal and the power of the
jammer, and $B_{SS}/F$ is the spreading ratio $N_{SF}$, also defined as the processing gain of the
system.
The above considerations are now made more precise for the following case.
where $s(t)$ is a DS signal given by (10.9) with amplitude $A^{(u)} = 1$, $w_C(t)$ is AWGN with
spectral density $N_0$, and the interferer is given by
Modeling the sequence $\{c^*_{kN_{SF}}, c^*_{kN_{SF}+1}, \ldots, c^*_{kN_{SF}+N_{SF}-1}\}$ as a sequence of i.i.d. random
variables, the variance of the summation in (10.40) is equal to $N_{SF}$, and the ratio
is given by
$$\frac{(N_{SF} E_h)^2}{(N_{SF} N_0 E_h + \mathsf{M}_j E_h T_{chip} N_{SF})/2} \qquad (10.41)$$
We note that in the denominator of (10.42) the ratio $\mathsf{M}_j/\mathsf{M}_s$ is divided by $N_{SF}$. Recognizing
that $\mathsf{M}_j/\mathsf{M}_s$ is the ratio between the power of the jammer and the power of the desired
signal before the despreading operation, and that $\mathsf{M}_j/(N_{SF}\mathsf{M}_s)$ is the same ratio after the
despreading, we find that, by analogy with the previous case of narrowband interference,
also in the case of a sinusoidal jammer the use of the DS technique reduces the effect of
the jammer by a factor equal to the processing gain.
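The processing-gain effect can be made concrete with a constant (zero-frequency) jammer and a length-7 m-sequence as spreading code: after despreading, the desired term is scaled by $N_{SF} = 7$, while the jammer term collapses to the sum of the code chips, which by the balance property of m-sequences has magnitude 1. This is a sketch of our own; the LFSR taps are one standard choice for the primitive polynomial $x^3 + x + 1$:

```python
import numpy as np

def m_sequence(taps, state, length):
    """Binary m-sequence from an LFSR; chips mapped 0 -> +1, 1 -> -1."""
    out = []
    for _ in range(length):
        out.append(state[-1])              # output the last register stage
        fb = 0
        for t in taps:                     # XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]          # shift, feed back
    return np.array([1 if b == 0 else -1 for b in out])

c = m_sequence(taps=[3, 1], state=[1, 0, 0], length=7)   # N_SF = 2^3 - 1 = 7
a = 1.0                                                  # one BPSK symbol
jam = np.ones(7)                                         # constant jammer over the symbol

y_signal = (a * c) @ c                                   # desired term: N_SF * a = 7
y_jam = jam @ c                                          # jammer term: sum of chips = -1
print(y_signal, y_jam)
```

The jammer-to-signal ratio at the despreader output is thus reduced by a factor of the order of the processing gain, in line with the discussion above.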
Before introducing a structure that is often employed in receivers for DS spread spectrum
signals, we make a few observations on the radio channel model introduced in
Section 4.6.
$$g_C(\tau) = \sum_{i=0}^{N_{c,\infty}-1} g_i\, \delta(\tau - i T_Q) \qquad (10.43)$$
where for simplicity we have assumed the absence of Doppler spread. Therefore the non-zero
gains $\{g_i\}$ are uncorrelated random variables and the delays $\tau_i = i T_Q$ are multiples of
a sufficiently small period $T_Q$.
Hence, from (10.43), the channel output signal $s_C$ is related to the input signal $s$ by
$$s_C(t) = \sum_{i=0}^{N_{c,\infty}-1} g_i\, s(t - i T_Q) \qquad (10.44)$$
Now the number of resolvable or uncorrelated rays in (10.44) is generally less than $N_{c,\infty}$
and is related to the bandwidth of $s$ by the following rule: if $s$ has a bandwidth $B$, the
uncorrelated rays are spaced by a delay of the order of $1/B$. Consequently, for a channel
with a delay spread $\tau_{rms}$ and bandwidth $B \propto 1/T_{chip}$, the number of resolvable rays is
given by
$$N_{c,res} \propto \frac{\tau_{rms}}{T_{chip}} \qquad (10.45)$$
Using the notion of channel coherence bandwidth, $B_{ccb} \propto 1/\tau_{rms}$, (10.45) may be
rewritten as
$$N_{c,res} \propto \frac{B}{B_{ccb}} \qquad (10.46)$$
We now give an example that illustrates the above considerations. Let $\{g_C(nT_Q)\}$ be a
realization of the channel impulse response with uncorrelated coefficients having a given
power delay profile; the "infinite bandwidth" of the channel will be equal to $B = 1/(2T_Q)$.
We now filter $\{g_C(nT_Q)\}$ with two filters having, respectively, bandwidth $B = 0.1/(2T_Q)$
and $B = 0.01/(2T_Q)$, and we compare the three pulse shapes given by the input sequence
and the two output sequences. We note that the output obtained in correspondence of the
filter with the narrower bandwidth has fewer resolvable rays. In fact, in the limit $B \to 0$
the output is modeled as a single random variable.
Another way to derive (10.45) is to observe that, for $t$ within an interval of duration
$1/B$, $s$ does not vary much. Therefore, letting
$$N_{cor} = \frac{N_{c,\infty}}{N_{c,res}} \qquad (10.47)$$
equation (10.44) can be written as
$$s_C(t) = \sum_{j=0}^{N_{c,res}-1} g_{res,j}\, s(t - j N_{cor} T_Q) \qquad (10.48)$$
where
$$g_{res,j} \simeq \sum_{i=0}^{N_{cor}-1} g_{i+jN_{cor}} \qquad (10.49)$$
and let $g_M(t) = q_C^*(t_0 - t)$ be the corresponding matched filter. In practice, at the output
of the filter $g_{AA}$ an estimate of $q_C$ with sampling period $T_c = T_{chip}/2$ is evaluated,2 which
yields the corresponding discrete-time matched filter with sampling period of the input
signal equal to $T_c$ and sampling period of the output signal equal to $T_{chip}$ (see Figure 10.11).
If $q_C$ is sparse, that is, it has a large support but only a few non-zero coefficients, for
the realization of $g_M$ we retain only the coefficients of $q_C$ with larger amplitude; it is better
to set the remaining coefficients to zero because their estimates are usually very noisy (see
Appendix 3.A).
Figure 10.12a illustrates in detail the receiver of Figure 10.11 for a filter $g_M$ with at most
$N_{MF}$ coefficients spaced $T_c = T_{chip}/2$ apart. If we now implement the despreader on every
branch of the filter $g_M$, we obtain the structure of Figure 10.12b. We observe that typically
only 3 or 4 branches are active, that is, they have a coefficient $g_{M,i}$ different from zero.
Ideally, for an overall channel with $N_{res}$ resolvable paths, we assume
$$q_C(t) = \sum_{i=1}^{N_{res}} q_{C,i}\, \delta(t - \tau_i) \qquad (10.51)$$
hence
$$g_M(t) = \sum_{j=1}^{N_{res}} q^*_{C,j}\, \delta(t_0 - t - \tau_j) \qquad (10.52)$$
Defining
$$t_{M,j} = t_0 - \tau_j \qquad j = 1, \ldots, N_{res} \qquad (10.53)$$
the receiver scheme, analogous to that of Figure 10.12b, is illustrated in Figure 10.13.
2 To determine the optimum sampling phase $t_0$, usually $r_{AA}$ is oversampled with a period $T_Q$ such that
$T_c/T_Q = 2$ or $4$ for $T_c = T_{chip}/2$; among the 2 or 4 estimates of $g_C$ obtained with sampling period $T_c$,
the one with the largest energy is selected (see Observation 8.5 on page 641).
Figure 10.12. Two receiver structures: (a) chip matched filter with despreader, (b) rake.
To simplify the analysis, we assume that the spreading sequence is a PN sequence with
$N_{SF}$ sufficiently large, such that the following approximations hold: 1) the autocorrelation
of the spreading sequence is a Kronecker delta; and 2) the delays $\{\tau_i\}$ are multiples
of $T_{chip}$.
From (10.51), in the absence of noise the signal $r_{AA}$ is given by
$$r_{AA}(t) = \sum_{n=1}^{N_{res}} q_{C,n} \sum_{i=-\infty}^{+\infty} \sum_{\ell=0}^{N_{SF}-1} a_i^{(1)}\, c^{(1)}_{\ell+iN_{SF}}\, \delta(t - \tau_n - (\ell + iN_{SF})\, T_{chip}) \qquad (10.54)$$
Figure 10.13. Rake receiver for a channel with $N_{res}$ resolvable paths.
3 Instead of using the Dirac delta in (10.51), a similar analysis assumes that 1) $g_{AA}(t) = h^*_{Tx}(t)$, and 2) $r_{h_{Tx}}(t)$
is a Nyquist pulse. The result is the same as (10.55).
coefficients of rays with larger gain. The delays and the coefficients are updated whenever
a change in the channel impulse response is observed. However, after the initialization has
taken place, on each finger of the rake the estimates of the amplitude and of the delay of
the corresponding ray may be refined by using the correlator of the despreader, as indicated
by the dotted line in Figure 10.13. We note that if the channel is static, the structure of
Figure 10.12a with $T_c = T_{chip}/2$ yields a sufficient statistic.
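The rake idea of Figure 10.13 can be sketched end-to-end: spread BPSK symbols with a Barker-13 code (whose aperiodic autocorrelation sidelobes have magnitude at most 1, so IPI stays small), pass the chips through a two-path chip-spaced channel, and combine one despread finger per path with MRC weights $q^*_{C,j}$. The specific channel and all names are our illustration, not the book's:

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])   # Barker-13 code
N_SF = len(c)
K = 20
a = rng.choice([-1, 1], size=K)                  # BPSK symbols

d = np.repeat(a, N_SF) * np.tile(c, K)           # spread chip sequence
q = np.array([1.0, 0.5])                         # path gains, delays 0 and 1 chip
r = np.zeros(K * N_SF + 1)
for i, qi in enumerate(q):                       # chip-spaced multipath channel
    r[i:i + K * N_SF] += qi * d

# rake: one finger per path, despread at the path delay, MRC combining
y = np.zeros(K)
for k in range(K):
    for i, qi in enumerate(q):
        finger = r[k * N_SF + i : k * N_SF + i + N_SF]
        y[k] += np.conj(qi) * (finger @ c)
a_hat = np.sign(y)
print(np.array_equal(a_hat, a))  # True: MRC recovers the symbols despite IPI
```

Each symbol's desired term is $\bigl(\sum_j |q_{C,j}|^2\bigr) N_{SF}\, a_k = 16.25\, a_k$, while the inter-path leakage is bounded by the small Barker sidelobes, so the decisions are reliable.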
10.4 Interference
For a dispersive channel and in the case of $U$ users, we evaluate the expression of the
signal $y_k$ at the decision point using the matched filter receiver of Figure 10.11.
Similarly to (10.50), we define
and let
$$g_M^{(v)}(t) = q_C^{(v)*}(t_0 - t) \qquad v = 1, \ldots, U \qquad (10.58)$$
be the corresponding matched filter. Moreover, we introduce the correlation between $q_C^{(u)}$
and $q_C^{(v)}$, expressed by
Assuming without loss of generality that the desired user signal has the index $u = 1$, we
refer to the receiver of Figure 10.11, where we have $g_M(t) = g_M^{(1)}(t) = q_C^{(1)*}(t_0 - t)$, and
$$x_m = \sum_{u=1}^{U} A^{(u)} \sum_{i=-\infty}^{+\infty} \sum_{\ell=0}^{N_{SF}-1} a_i^{(u)}\, c^{(u)}_{\ell+iN_{SF}}\, r_{q_C^{(u)} q_C^{(1)}}\bigl((m - \ell - iN_{SF})\, T_{chip}\bigr) + \tilde w_m \qquad (10.60)$$
$$y_k = \sum_{j=0}^{N_{SF}-1} x_{j+kN_{SF}}\, c^{(1)*}_{j+kN_{SF}} + w_k = \sum_{u=1}^{U} A^{(u)} \sum_{i=-\infty}^{+\infty} \sum_{\ell=0}^{N_{SF}-1} \sum_{j=0}^{N_{SF}-1} a_i^{(u)}\, c^{(u)}_{\ell+iN_{SF}}\, c^{(1)*}_{j+kN_{SF}}\, r_{q_C^{(u)} q_C^{(1)}}\bigl((j - \ell + (k-i)N_{SF})\, T_{chip}\bigr) + w_k \qquad (10.61)$$
where, to simplify the notation, we have assumed that the user code sequences are periodic
of period $N_{SF}$.
The desired term in (10.61) is obtained for $u = 1$; as $r^*_{c^{(1)}}(n) = r_{c^{(1)}}(n)$, it has the
following expression:
$$A^{(1)} \sum_{i=-\infty}^{+\infty} a_i^{(1)} \sum_{n=-(N_{SF}-1)}^{N_{SF}-1} (N_{SF} - |n|)\, r_{c^{(1)}}(n)\, r_{q_C^{(1)}}\bigl((n + (k-i)N_{SF})\, T_{chip}\bigr) \qquad (10.63)$$
where $E_{q_C^{(1)}}$ is the energy per chip of the overall pulse at the output of the filter $g_{AA}$; then
the desired term (10.63) becomes
which coincides with the case of an ideal AWGN channel (see (10.27)). Note that using
the same assumptions we find that the rake receiver behaves as an MRC (see (10.56)).
If (10.64) does not hold, as happens in practice, and if
the terms for $n \neq 0$ in (10.63) give rise to intersymbol interference, in this context also
called inter-path interference (IPI). Usually, the smaller $N_{SF}$, the larger the IPI. We note,
however, that if the overall pulse at the output of the CMF is a Nyquist pulse, that is
[Figure: bank of filters $g_M^{(u)}$, $u = 1, \ldots, U$, sampled at $T_{chip}$, producing $x_m^{(u)}$: (a) with individual data detectors yielding $\hat a_k^{(u)}$, (b) with a multiuser detector.]
[Figure: baseband equivalent model for chip-level equalization: each user chip sequence $d_m^{(u)}$ passes through $h_{Tx} * g_C^{(u)}$; the received signal plus $w_C(t)$ is filtered by $g_{AA}$, sampled at $T_c = T_{chip}/2$, equalized by $g_{CE}$ to yield $\tilde d_m^{(1)}$, downsampled, and despread by $g_{ds}^{(1)}$, yielding $y_k$ and the detected symbols $\hat a_{k-D}^{(1)}$.]
function is given by
$$J = \mathrm{E}\bigl[\,|\tilde d_m - d_m|^2\,\bigr] \qquad (10.69)$$
1) All code sequences are known. This is the case that may occur for downlink transmission
in wireless networks. Then $g_C^{(u)}(t) = g_C(t)$, $u = 1, \ldots, U$, and we assume
$$d_m = \sum_{u=1}^{U} d_m^{(u)} \qquad (10.70)$$
that is, for the equalizer design, all user signals are considered as desired signals.
2) Only the code sequence of the desired user signal is known. In this case we need to
assume
$$d_m = d_m^{(1)} \qquad (10.71)$$
The other user signals are considered as white noise, with overall PSD $N_i$, that is added
to $w_C$.
From the knowledge of $q_C^{(1)}$ and the overall noise PSD, the minimum of the cost function
defined in (10.69) is obtained by following the same steps developed in Chapter 8.
Obviously, if the level of interference is high, the solution corresponding to (10.71) yields
a simple CMF, with low performance whenever the residual interference (MUI and IPI) at
the decision point is high.
A better structure for single-user detection is obtained by the following approach.
[Figure: baseband equivalent model for symbol-level equalization: each user signal $a_k^{(u)}$ passes through $h_T^{(u)} * g_C^{(u)}$; the received signal plus $w_C(t)$ is filtered by $g_{AA}$, sampled at $T_c = T_{chip}/2$, and processed by the equalizer $g_{SE}^{(1)}$ with downsampling by $2N_{SF}$, yielding $y_k$ and the detected symbols $\hat a_{k-D}^{(1)}$.]
Note that $g_{SE}^{(1)}$, which includes also the function of despreading, depends on the code sequence
of the desired user. Therefore the length of the code sequence is usually not larger than
$N_{SF}$, otherwise we would find a different solution for every symbol period, even if $g_C^{(1)}$ is
time invariant. Moreover, in this formulation the other user signals are seen as interference,
and one of the tasks of $g_{SE}$ is to mitigate the MUI.
In an adaptive approach, for example using the LMS algorithm, the solution is simple
to determine and does not require any particular a priori knowledge, except the training
sequence in $\{a_k^{(1)}\}$ for initial convergence. On the other hand, using a direct approach
we need to identify the autocorrelation of $r_{AA,n}$ and the cross-correlation between $r_{AA,n}$
and $a_{k-D}^{(1)}$. As usual, these correlations are estimated directly or, assuming the messages
$\{a_k^{(u)}\}$, $u = 1, \ldots, U$, are i.i.d. and independent of each other, we can determine them
using the knowledge of the various pulses $\{h_T^{(u)}\}$ and $\{g_C^{(u)}\}$, that is, the channel impulse
responses and code sequences of all users; for the special case of downlink transmission,
the knowledge of the code sequences is sufficient, as the channel is common to all user
signals.
Here we first consider the simplest among multiuser receivers. It comprises a bank of U
filters gT.u/ , u D 1; : : : ; U , matched to the impulse responses4
NX
SF 1
qT.u/ .t/ D c`.u/ qC.u/ .t ` Tchip / u D 1; : : : ; U (10.73)
`D0
where the functions {q_C^(u)(t)} are defined in (10.57). Decisions taken by threshold detectors on the U output signals, sampled at the symbol rate, yield the detected user symbol sequences. It is useful to introduce this receiver, which we denote as MF, because, when the threshold detectors are substituted with more sophisticated detection devices, it represents the first stage of several multiuser receivers, as illustrated in general in Figure 10.17.
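In discrete time, with one sample per chip and an ideal rectangular chip pulse (both simplifying assumptions), the matched-filter bank of (10.73) reduces to a correlation of the received chips with each user's code. The sketch below uses made-up ±1 Walsh-like code sequences for two users:

```python
import numpy as np

# One sample per chip, rectangular chip pulse, made-up +/-1 Walsh-like codes.
N_SF = 8
codes = np.array([[1, 1, 1, 1, -1, -1, -1, -1],   # user 1
                  [1, -1, 1, -1, 1, -1, 1, -1]])  # user 2
a = np.array([1, -1])                              # one symbol per user

# Received chips: superposition of the spread user symbols (no noise, no channel)
r = sum(a[u] * codes[u] for u in range(2)).astype(float)

# Matched filter bank: correlate with each code and sample at the symbol rate
y = np.array([codes[u] @ r for u in range(2)]) / N_SF
a_hat = np.sign(y)                                 # threshold detectors
```

With orthogonal codes, as here, the MUI at the matched-filter outputs is zero and the threshold detectors recover both symbols exactly; with non-orthogonal codes or a dispersive channel, the residual MUI motivates the multiuser detectors discussed next.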
We introduce the following vector notation. The vector of symbols transmitted by the U users in a symbol period T is a_k = [a_k^(1), …, a_k^(U)]^T; stacking the K symbol periods of the transmission, we obtain

a = [a_0^T, …, a_{K−1}^T]^T = [a_0^(1), …, a_0^(U), …, a_{K−1}^(1), …, a_{K−1}^(U)]^T    (10.76)

An analogous vector carries the information on the codes and the channel impulse responses of the U users.
[Figure 10.17. General multiuser receiver: the received signal r_C(t) is processed by a bank of U filters g_M^(u), sampled at t_0 + m T_chip, followed by despreaders g_ds^(u) producing the symbol-rate outputs y_k^(u); a multiuser detector yields the detected sequences â_k^(u), u = 1, …, U.]
⁴ We assume that the information on the power of the user signals is included in the impulse responses g_C^(u), u = 1, …, U, so that A^(u) = 1, u = 1, …, U.
822 Chapter 10. Spread spectrum systems
where L_H is a lower triangular matrix with positive real elements on the main diagonal. Using (10.76) and (10.79), we find that the vector y satisfies the linear relation

y = T a + w    (10.81)

Once the expression (10.81) is obtained, the vector a can be detected by well-known techniques [20]. Applying the zero-forcing criterion, at the decision point we get the vector

z = T^{−1} y = a + T^{−1} w    (10.82)

Equation (10.82) shows that the zero-forcing criterion completely eliminates both ISI and MUI, but it may enhance the noise. Applying instead the MSE criterion to the signal r_C(t), suitably sampled, leads to the solution (see (2.229))

z = (T + N_0 I)^{−1} y    (10.83)
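The zero-forcing solution (10.82) and the MSE solution (10.83) can be compared on a toy system; the matrix T, the noise level N_0, and all numerical values below are made up for illustration:

```python
import numpy as np

# Toy comparison of (10.82) and (10.83); T and N0 are made-up values.
rng = np.random.default_rng(0)
T = np.eye(4) + 0.2 * rng.standard_normal((4, 4))   # system matrix of (10.81)
a = np.array([1.0, -1.0, 1.0, 1.0])                 # transmitted symbols
w = 0.1 * rng.standard_normal(4)                    # noise
y = T @ a + w                                       # (10.81): y = T a + w

z_zf = np.linalg.solve(T, y)                        # (10.82): ISI and MUI removed exactly
N0 = 0.01
z_mmse = np.linalg.solve(T + N0 * np.eye(4), y)     # (10.83): trades residual interference for noise
a_hat = np.sign(z_zf)                               # threshold detection
```

Note that solving the linear system directly, rather than forming T^{−1} explicitly, is the numerically preferred way to evaluate (10.82) and (10.83).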
10.7. Maximum likelihood multiuser detector 823
r_C(t) = Σ_{i=0}^{K−1} a_i^T q_T(t − iT) + w_C(t)    (10.85)

In (10.92), w(D) is the noise term with matrix spectral density N_0 Q(D). Assuming that Q(D) does not have poles on the unit circle, it can be factorized in the form Q(D) = F(D) F^H(D^{−1}), where F(D) is causal and F_0 is a lower triangular matrix. Now let Λ(D) = [F^H(D^{−1})]^{−1}, an anticausal filter by construction. Applying Λ(D) to y(D) in (10.92), we obtain a signal z(D) whose noise term w′(D) is a white Gaussian process. Consequently, in the time domain (10.95) becomes

z_k = Σ_{m=0}^{∞} F_m a_{k−m} + w′_k    (10.96)

We note that, as F_0 is a lower triangular matrix, the metric has a causal dependence also with regard to the ordering of the users.
For further study on multiuser detection techniques we refer the reader to [24, 25, 26].
Bibliography
[2] R. C. Dixon, Spread spectrum systems. New York: John Wiley & Sons, 3rd ed., 1994.
[4] J. G. Proakis, Digital communications. New York: McGraw-Hill, 3rd ed., 1995.
[5] R. Price and P. E. Green, “A communication technique for multipath channels”, IRE
Proceedings, vol. 46, pp. 555–570, Mar. 1958.
[8] “Wideband CDMA”, IEEE Communications Magazine, vol. 36, pp. 46–95, Sept. 1998.
[9] G. Cherubini and L. B. Milstein, “Performance analysis of both hybrid and frequency–
hopped phase–coherent spread–spectrum system. Part I and Part II”, IEEE Trans. on
Communications, vol. 37, pp. 600–622, June 1989.
[10] A. Klein, “Data detection algorithms specially designed for the downlink of CDMA
mobile radio systems”, in Proc. 1997 IEEE Vehicular Technology Conference, Phoenix,
USA, pp. 203–207, May 4–7 1997.
[11] K. Li and H. Liu, “A new blind receiver for downlink DS-CDMA communications”,
IEEE Communications Letters, vol. 3, pp. 193–195, July 1999.
[12] S. Werner and J. Lilleberg, “Downlink channel decorrelation in CDMA systems with
long codes”, in Proc. 1999 IEEE Vehicular Technology Conference, Houston, USA,
pp. 1614–1617, May 16–20 1999.
[13] K. Hooli, M. Latva-aho, and M. Juntti, “Multiple access interference suppression with
linear chip equalizers in WCDMA downlink receivers”, in Proc. 1999 IEEE Global
Telecommunications Conference, Rio de Janeiro, Brazil, pp. 467–471, Dec. 5–9 1999.
[17] A. Klein and P. W. Baier, “Linear unbiased data estimation in mobile radio sys-
tems applying CDMA”, IEEE Journal on Selected Areas in Communications, vol. 11,
pp. 1058–1066, Sept. 1993.
[18] J. Blanz, A. Klein, M. Naßhan, and A. Steil, “Performance of a cellular hybrid
C/TDMA mobile radio system applying joint detection and coherent receiver antenna
diversity”, IEEE Journal on Selected Areas in Communications, vol. 12, pp. 568–579,
May 1994.
[19] G. K. Kaleh, “Channel equalization for block transmission systems”, IEEE Journal
on Selected Areas in Communications, vol. 13, pp. 110–120, Jan. 1995.
[20] A. Klein, G. K. Kaleh, and P. W. Baier, “Zero forcing and minimum mean-square-
error equalization for multiuser detection in code-division multiple-access channels”,
IEEE Trans. on Vehicular Technology, vol. 45, pp. 276–287, May 1996.
[21] N. Benvenuto and G. Sostrato, “Joint detection with low computational complexity
for hybrid TD-CDMA systems”, IEEE Journal on Selected Areas in Communications,
vol. 19, pp. 245–253, Jan. 2001.
[22] G. E. Bottomley and S. Chennakeshu, “Unification of MLSE receivers and extension
to time-varying channels”, IEEE Trans. on Communications, vol. 46, pp. 464–472,
Apr. 1998.
[23] A. Duel-Hallen, “A family of multiuser decision feedback detectors for asynchronous
code-division multiple access channels”, IEEE Trans. on Communications, vol. 43,
pp. 421–434, Feb./Mar./Apr. 1995.
[24] S. Verdú, Multiuser detection. Cambridge: Cambridge University Press, 1998.
[25] “Multiuser detection techniques with application to wired and wireless communica-
tions systems I”, IEEE Journal on Selected Areas in Communications, vol. 19, Aug.
2001.
[26] “Multiuser detection techniques with application to wired and wireless communica-
tions systems II”, IEEE Journal on Selected Areas in Communications, vol. 20, Feb.
2002.
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 11
Channel codes
Forward error correction (FEC) is a widely used technique to achieve reliable data trans-
mission. The redundancy introduced by an encoder for the transmission of data in coded
form allows the decoder at the receiver to detect and partially correct errors. An alternative
transmission technique, known as automatic repeat query or request (ARQ), consists in
detecting the errors (usually by a check-sum transmitted with the data, see page 875) and
requesting the retransmission of a data packet whenever it is received with errors.
The FEC technique presents two advantages with respect to the ARQ technique.
1. In systems that make use of the ARQ technique the data packets may have to be retransmitted several times until they are received without errors; hence, for large values of the error probability, the aggregate traffic of the link is higher.
2. In systems that make use of the FEC technique the receiver does not have to request the retransmission of data packets, thus making possible the use of a simplex link (see Section 6.13); this feature represents a strong point in many applications, like TDMA and video satellite links, where a central transmitter broadcasts to receive-only terminals that cannot issue retransmission requests. The FEC technique is also particularly useful in various satellite communication applications, in which the long round-trip delay of the link would cause serious traffic problems if the ARQ technique were used.
We distinguish two broad classes of FEC techniques, each with numerous subclasses, em-
ploying block codes or convolutional codes.
All error correction techniques add redundancy, in the form of additional bits, to the
information bits that must be transmitted. Redundancy makes the correction of errors
possible and for the classes of codes considered in this chapter represents the coding
overhead. The effectiveness of a coding technique is expressed in terms of the coding
gain, G code , given by the difference between the signal-to-noise ratios, in dB, that are
required to achieve a certain bit error probability for transmission without and with cod-
ing (see Definition 6.2 on page 508). The overhead is expressed in terms of the code
rate, Rc , given by the ratio between the number of information bits and the number of
code bits that are transmitted. The transmission bit rate is inversely proportional to Rc ,
and is larger than that necessary for uncoded data. If one of the modulation techniques
of Chapter 6 is employed, the modulation rate is also larger. In Chapter 12 methods to
transmit coded sequences of symbols without an increase in the modulation rate will be
discussed.
For further study on the topic of error correcting codes we refer to [1, 2, 3].
Observation 11.1
The code rate R_c is related to the encoder-modulator rate R_I of (6.93) by the relation

R_I = (k_0 log_2 M)/(n_0 I) = R_c (log_2 M)/I    (11.1)

where M is the number of symbols of the I-dimensional constellation adopted by the bit-mapper.
Because the number of bits per unit of time produced by the encoder is larger than that
produced by the source, two transmission strategies are possible.
Transmission for a given bit rate of the information message. With reference to Figure 6.20, from the relation

k_0 T_b = n_0 T_cod    (11.2)

we obtain

1/T_cod = (1/R_c)(1/T_b)    (11.3)
¹ In this chapter a block code will sometimes be indicated also by the notation (n, k).
11.1. System model 829
note that the bit rate at the modulator input is increased in the presence of the encoder. For a given modulator with M symbols, that is, using the same bit mapper, this implies an increase of the modulation rate given by

1/T′ = 1/(T_cod log_2 M) = (1/T)(1/R_c)    (11.4)

and therefore an increase of the bandwidth of the transmission channel by a factor 1/R_c. Moreover, for the same transmitted power, from (6.105) in the presence of the encoder the signal-to-noise ratio becomes

Γ′ = Γ R_c    (11.5)

and the number of information bits carried per modulation symbol becomes

L′_b = (k_0/n_0) log_2 M = R_c L_b    (11.6)

Assuming the same transmitted power, from (6.97) and (11.4) we get

E′_sCh = R_c E_sCh    (11.7)

Therefore (6.99) yields for the encoded message {c_m} an energy per bit of information equal to

E′_sCh / L′_b = E_sCh / L_b = E_b    (11.8)

Since Γ′ ≠ Γ, a comparison between the performance of the two systems, with and without coding, is made for the same E_b/N_0; the coding gain, in dB, is then given by the difference between the values of E_b/N_0 required to achieve a given bit error probability without and with coding.
Transmission for a given modulation rate. For given transmitted power and given transmission channel bandwidth, Γ remains unchanged in the presence of the encoder. Therefore, there are three possibilities.

1. The bit rate of the information message decreases by a factor R_c and becomes

1/T′_b = R_c (1/T_b)    (11.10)
2. The source emits information bits in packets, and each packet is followed by additional bits generated by the encoder, forming a code word; the resulting bits are transmitted at the rate

1/T_cod = 1/T_b    (11.11)
3. A block of m information bits is mapped to a transmitted symbol using a constellation with cardinality M > 2^m. In this case transmission occurs without decreasing the bit rate of the information message.
In the first two cases, for the same number of bits of the information message we have an
increase in the duration of the transmission by a factor 1=Rc .
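As a numerical illustration (with a made-up bit rate), the two strategies can be compared for the rate R_c = 4/7 of the (7,4) code used later in this chapter:

```python
# Made-up numbers: information bit interval Tb of 1 microsecond, code rate 4/7.
Rc = 4 / 7
Tb = 1e-6

# Given bit rate: the coded bit rate grows to 1/T_cod = (1/Rc)(1/Tb), see (11.3)
coded_rate = (1 / Rc) * (1 / Tb)        # 1.75 Mbit/s at the modulator input

# Given modulation rate: the information bit rate drops by Rc, see (11.10)
info_rate = Rc * (1 / Tb)               # about 0.571 Mbit/s
```

In the first case the channel bandwidth grows by 1/R_c = 1.75; in the second case the bandwidth is unchanged but the same message takes 1.75 times longer to deliver.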
For a given bit error probability in the sequence {b̂_ℓ}, we expect that in the presence of coding a smaller Γ is required to achieve a certain error probability as compared to the case of transmission of an uncoded message; this reduction corresponds to the coding gain (see Definition 6.2 on page 508).
Definition 11.1
The Hamming distance between two vectors v_1 and v_2, d_H(v_1, v_2), is given by the number of elements in which the two vectors differ.

Definition 11.2
The minimum Hamming distance of a block code, to which we will refer in this chapter simply as the minimum distance, is denoted by d_min^H and coincides with the smallest number of positions in which any two code words differ.

An example of a block code with n = 4, M_c = 4 and d_min^H = 2 is given by (11.22).
For the binary symmetric channel model (6.90), assuming that the binary code word c of length n is transmitted, we observe at the receiver³

z = c ⊕ e    (11.12)
² The material presented in Sections 11.2 and 11.3 is largely based on lectures given at the University of California, San Diego, by Professor Jack K. Wolf [4], whom the authors gratefully acknowledge.
³ In Figure 6.20, z is indicated as c̃.
11.2. Block codes 831
where ⊕ denotes the modulo 2 sum of respective vector components; for example, (0111) ⊕ (0010) = (0101). In (11.12), e is the binary error vector whose generic component is equal to 1 if the channel has introduced an error in the corresponding bit of c, and 0 otherwise. We note that z can assume all the 2^n possible combinations of n bits.
With reference to Figure 6.20, the function of the decoder consists in associating a code word with each possible value z. A commonly adopted criterion is to associate z with the code word ĉ that is closest according to the Hamming distance. From this code word the k_0 information bits, which form the sequence {b̂_ℓ}, are recovered by inverse mapping.
Interpreting the code words as points in an n-dimensional space where the distance
between points is given by the Hamming distance, we obtain the following properties.
1. A binary block code with minimum distance d_min^H can correct all patterns of

t = ⌊(d_min^H − 1)/2⌋    (11.13)

or fewer errors, where ⌊x⌋ denotes the largest integer not greater than x.

2. A binary block code with minimum distance d_min^H can detect all patterns of (d_min^H − 1) or fewer errors.

3. In a binary erasure channel, the transmitted binary symbols are detected using a ternary alphabet {0, 1, erasure}; a symbol is detected as erasure if the reliability of a binary decision is low. In the absence of errors, a binary block code with minimum distance d_min^H can fill in (d_min^H − 1) erasures.
4. Seeking a relation among n, M_c, and d_min^H, we find that, for fixed n and odd d_min^H, M_c is upper bounded by⁴

M_c ≤ M_UB = ⌊ 2^n / ( 1 + \binom{n}{1} + \binom{n}{2} + ⋯ + \binom{n}{(d_min^H − 1)/2} ) ⌋    (11.15)

where ⌈x⌉ denotes the smallest integer greater than or equal to x. We will now consider a procedure for finding such a code.
⁴ We recall that the number of binary sequences of length n with m 'ones' is equal to

\binom{n}{m} = n! / (m! (n − m)!)    (11.14)

where n! = n(n − 1) ⋯ 1.
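Properties 1 and 4 above lend themselves to direct computation; the sketch below evaluates (11.13) and the bound (11.15) for the parameters n = 7 and d_min^H = 3 of the (7,4) code discussed later in this chapter:

```python
from math import comb

def correctable_errors(d_min):
    # (11.13): t = floor((d_min - 1)/2)
    return (d_min - 1) // 2

def hamming_bound(n, d_min):
    # (11.15): upper bound on the number of code words, for odd d_min
    t = (d_min - 1) // 2
    return 2**n // sum(comb(n, i) for i in range(t + 1))

# For n = 7 and d_min = 3: t = 1 and at most 2^7/(1 + 7) = 16 code words,
# exactly the size of the (7,4) code.
print(correctable_errors(3), hamming_bound(7, 3))   # 1 16
```

The bound is met with equality here, consistent with the fact that the (7,4) code of Example 11.2.2 is a perfect single error correcting code.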
832 Chapter 11. Channel codes
Step 1: choose any code word of length n and exclude from future choices that word and all words that differ from it in (d_min^H − 1) or fewer positions. The total number of words excluded at this step is

N_c(n, d_min^H − 1) = 1 + \binom{n}{1} + \binom{n}{2} + ⋯ + \binom{n}{d_min^H − 1}

Step i: choose a word not previously excluded and exclude from future choices all words previously excluded plus the chosen word and those that differ from it in (d_min^H − 1) or fewer positions.

Continue this procedure until there are no more words available to choose from. At each step, at most N_c(n, d_min^H − 1) additional words are excluded, fewer if some of them had already been excluded; therefore after step i, when i code words have been chosen, at most i N_c(n, d_min^H − 1) words have been excluded. Then, if 2^n / N_c(n, d_min^H − 1) is an integer, we can choose at least that number of code words; if it is not an integer, we can choose at least a number of code words equal to the next largest integer.
Definition 11.3
A binary code with group structure is a binary block code for which the following conditions are verified:
1. the all-zero word is a code word (zero code word);
2. the modulo 2 sum of any two code words is also a code word.

Definition 11.4
The weight of any binary vector x, denoted as w(x), is the number of ones in the vector.

Property 1 of a group code. The minimum distance of the code, d_min^H, is given by

d_min^H = min_{c ≠ 0} w(c)    (11.18)
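Property 1 can be verified numerically on the four code words of (11.22), which form a group code:

```python
import numpy as np

# The four code words of (11.22), written as rows.
C = np.array([[0, 0, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]])

# Minimum Hamming distance over all pairs of code words
d_min = min(int(np.sum(C[i] != C[j])) for i in range(4) for j in range(i + 1, 4))
# Minimum weight of a non-zero code word
w_min = min(int(c.sum()) for c in C[1:])
print(d_min, w_min)   # 2 2
```

As expected from (11.18), the minimum distance equals the minimum weight of a non-zero code word, here 2.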
Property 2 of a group code. If all code words in a group code are written as rows of an M_c × n matrix, then every column is either all zeros or consists of half zeros and half ones.

Proof. An all-zero column is possible if all code words have a zero in that column. Suppose that in column i there are m 1s and (M_c − m) 0s. Choose one of the words with a 1 in that column and add it to all words that have a 1 in that column, including the word itself: this operation produces m distinct code words with a 0 in that column, hence (M_c − m) ≥ m. Now add that word to each word that has a 0 in that column: this produces (M_c − m) distinct code words with a 1 in that column, hence (M_c − m) ≤ m. Therefore M_c − m = m, that is, m = M_c/2.
Corollary 11.1
From Property 2 it follows that the number of code words M_c must be even for a binary group code.
Corollary 11.2
Excluding codes of no interest from the transmission point of view, for which all code words have a 0 in a given position, from Property 2 the average weight of a code word is equal to n/2.
Consider an r × n binary matrix

H = [A B]    (11.19)

where B is an r × r matrix with det[B] ≠ 0, i.e. the columns of B are linearly independent. A binary parity check code is a code consisting of all binary vectors c that are solutions of the equation

Hc = 0    (11.20)
Example 11.2.1
Let the matrix H be given by

H = [ 1 0 1 1
      0 1 0 1 ]    (11.21)

There are four code words in the binary parity check code corresponding to the matrix H; they are

c_0 = [0, 0, 0, 0]^T   c_1 = [1, 0, 1, 0]^T   c_2 = [0, 1, 1, 1]^T   c_3 = [1, 1, 0, 1]^T    (11.22)

Property 1 of a parity check code. A parity check code is a group code. First, the all-zero word is a code word, as

H0 = 0    (11.23)
Suppose that c_1 and c_2 are code words; then Hc_1 = 0 and Hc_2 = 0. It follows that H(c_1 ⊕ c_2) = Hc_1 ⊕ Hc_2 = 0, so c_1 ⊕ c_2 is also a code word.
Property 2 of a parity check code. The code words corresponding to the parity check matrix H = [A B] are identical to the code words corresponding to the parity check matrix H̃ = [B^{−1}A, I] = [Ã, I], where I is the r × r identity matrix.

Proof. Let c = [c_1^{n−r}; c_{n−r+1}^{n}] be a code word corresponding to the matrix H = [A B], where c_1^{n−r} denotes the first (n − r) components of the vector and c_{n−r+1}^{n} the last r components. Then

Hc = A c_1^{n−r} ⊕ B c_{n−r+1}^{n} = 0    (11.25)

Multiplying both sides by B^{−1}, we obtain

B^{−1}A c_1^{n−r} ⊕ I c_{n−r+1}^{n} = 0    (11.26)

or H̃c = 0.

From Property 2 we see that parity check matrices of the form H̃ = [Ã, I] are not less general than parity check matrices of the form H = [A B], where det[B] ≠ 0. In general, we can consider any r × n matrix as a parity check matrix, provided that some set of r columns has a non-zero determinant. If we are not concerned with the order in which the elements of a code word are transmitted, then such a code is equivalent to a code defined by a parity check matrix of the form

H = [A I]    (11.27)

The form of the matrix (11.27) is called canonical or systematic form. We assume that the last r columns of H have a non-zero determinant and therefore that the parity check matrix can be expressed in canonical form.
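The reduction of H = [A B] to the systematic form H̃ = [B^{−1}A, I] of Property 2 requires a matrix inversion over GF(2); a minimal sketch, with a made-up 2 × 4 parity check matrix and a helper function `gf2_inv` of our own naming, is the following:

```python
import numpy as np

def gf2_inv(B):
    """Invert a square binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = B.shape[0]
    M = np.concatenate([B % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col])  # exists since det[B] != 0
        M[[col, pivot]] = M[[pivot, col]]                    # bring a 1 onto the diagonal
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]                               # clear the rest of the column
    return M[:, n:]

# Made-up H = [A B] with det[B] != 0 over GF(2)
A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 1], [1, 0]])
H = np.concatenate([A, B], axis=1)
H_tilde = np.concatenate([gf2_inv(B) @ A % 2, np.eye(2, dtype=int)], axis=1)
```

Both matrices define the same code: the equations Hc = 0 and H̃c = 0 have exactly the same solutions over GF(2).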
Property 3 of a parity check code. There are exactly 2^{n−r} = 2^k code words in a parity check code.

Proof. Referring to the proof of Property 2, we find that

c_{n−r+1}^{n} = Ã c_1^{n−r}    (11.28)

For each of the 2^{n−r} = 2^k possible binary vectors c_1^{n−r} it is possible to compute the corresponding vector c_{n−r+1}^{n}. Each of these code words is unique, as all of them differ in the first (n − r) = k positions. Assume that there are more than 2^k code words; then at least two must agree in the first (n − r) = k positions. But from (11.28) these two code words then also agree in the last r positions, and therefore they are identical.
A code word can thus be written as

c = [m_0 … m_{k−1}, p_0 … p_{r−1}]^T    (11.29)

where the first k = (n − r) bits are called information bits and the last r bits are called parity check bits. As mentioned in Section 11.1, a parity check code that has code words of length n obtained by encoding k information bits is an (n, k) code.
Property 4 of a parity check code. A code word of weight w exists if and only if the modulo 2 sum of w columns of H equals 0.

Proof. c is a code word if and only if Hc = 0. Let h_j be the j-th column of H and let c_j be the j-th component of c. Therefore, if c is a code word, then

Σ_{j=1}^{n} h_j c_j = 0    (11.30)

If c is a code word of weight w, then there are exactly w non-zero components of c, say c_{j_1}, c_{j_2}, …, c_{j_w}. Consequently h_{j_1} ⊕ h_{j_2} ⊕ ⋯ ⊕ h_{j_w} = 0; thus a code word of weight w implies that the sum of w columns of H equals 0. Conversely, if h_{j_1} ⊕ h_{j_2} ⊕ ⋯ ⊕ h_{j_w} = 0, then Hc = 0, where c is a binary vector with elements equal to 1 in positions j_1, j_2, …, j_w.
From Property 1 of a group code and also from Properties 1 and 4 of a parity check
code we obtain the following property.
Property 5 of a parity check code. A parity check code has minimum distance d_min^H if some modulo 2 sum of d_min^H columns of H is equal to 0, but no modulo 2 sum of fewer than d_min^H columns of H is equal to 0.

Property 5 may be considered the fundamental property of parity check codes, as it forms the basis for the design of almost all such codes. An important exception is constituted by low-density parity check codes, which will be discussed in Section 11.7. A limit on the number of parity check bits required for a given block length n and given d_min^H derives from the following property.

Property 6 of a parity check code. A binary parity check code exists of block length n and minimum distance d_min^H, having no more than r* parity check bits, where

r* = ⌊ log_2 ( Σ_{i=0}^{d_min^H − 2} \binom{n−1}{i} ) ⌋ + 1    (11.31)
Proof. The proof derives from the following exhaustive construction procedure of the parity check matrix of the code.

Step 1: choose as the first column of H any non-zero vector with r* components.
Step 2: choose as the second column of H any non-zero vector different from the first.
Step i: choose as the i-th column of H any vector distinct from all vectors obtained by modulo 2 sum of (d_min^H − 2) or fewer previously chosen columns.

By this construction, no set of (d_min^H − 1) or fewer columns of H sums to 0. However, we must show that we can indeed continue this process for n columns. After applying this procedure for (n − 1) columns, there will be at most

N_c(n − 1, d_min^H − 2) = 1 + \binom{n−1}{1} + \binom{n−1}{2} + ⋯ + \binom{n−1}{d_min^H − 2}    (11.32)

distinct vectors that are forbidden for the choice of the last column, but there are 2^{r*} vectors to choose from; observing (11.31) and (11.32) we get 2^{r*} > N_c(n − 1, d_min^H − 2). Thus n columns can always be chosen where no set of (d_min^H − 1) or fewer columns sums to zero. From Property 5, the code therefore has minimum distance at least d_min^H.
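The bound (11.31) is easy to evaluate; for example, for n = 7 and d_min^H = 3 it gives r* = 3, which is achieved by the (7,4) code of Example 11.2.2:

```python
from math import comb, log2, floor

def r_star(n, d_min):
    # (11.31): a sufficient number of parity check bits for block length n
    return floor(log2(sum(comb(n - 1, i) for i in range(d_min - 1)))) + 1

print(r_star(7, 3))   # 3
```

Note that (11.31) is only an existence bound: particular codes, such as the Golay code, may need fewer parity check bits than r*.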
A parity check code may equivalently be described through a generator matrix G: the code words, considered now as row vectors, are given by all linear combinations of the rows of G. A parity check code can thus be specified by giving either its parity check matrix H or its generator matrix G.
Example 11.2.2
Consider the parity check code (7,4) with the parity check matrix

H = [ 1 1 0 1 1 0 0
      1 1 1 0 0 1 0
      1 0 1 1 0 0 1 ] = [A I]    (11.35)

Expressing a general code word according to (11.29), to every 4 information bits 3 parity check bits are added, related to the information bits by the equations (see (11.28))

p_0 = m_0 ⊕ m_1 ⊕ m_3
p_1 = m_0 ⊕ m_1 ⊕ m_2    (11.36)
p_2 = m_0 ⊕ m_2 ⊕ m_3
There are 16 code words, consisting of all linear combinations of the rows of G. By inspection, we find that the minimum weight of a non-zero code word is 3; hence, from (11.18), the code has d_min^H = 3 and is therefore a single error correcting code.
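The statements of this example can be checked by enumeration; the sketch below builds all 16 code words from the parity equations (11.36) and verifies the minimum weight:

```python
import numpy as np
from itertools import product

# H = [A I] of (11.35); the rows of A give the parity equations (11.36).
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [1, 0, 1, 1]])
H = np.concatenate([A, np.eye(3, dtype=int)], axis=1)

codewords = []
for m in product([0, 1], repeat=4):
    p = A @ np.array(m) % 2                       # parity check bits, see (11.36)
    codewords.append(np.concatenate([m, p]))

weights = [int(c.sum()) for c in codewords if c.any()]
print(len(codewords), min(weights))               # 16 3
```

Every word built this way satisfies Hc = 0, and the minimum non-zero weight of 3 confirms, via (11.18), that d_min^H = 3.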
Cosets
The 2^n possible binary sequences of length n are partitioned into 2^r sets, called cosets, by a group code with 2^k = 2^{n−r} code words; this partitioning is done as follows:

Step 1: choose the first set as the set of code words c_1, c_2, …, c_{2^k}.
Step 2: choose any vector, say η_2, that is not a code word; then choose the second set as c_1 ⊕ η_2, c_2 ⊕ η_2, …, c_{2^k} ⊕ η_2.
Step i: choose any vector, say η_i, not included in any previous set; choose the i-th set, i.e. coset, as c_1 ⊕ η_i, c_2 ⊕ η_i, …, c_{2^k} ⊕ η_i. The partitioning continues until all 2^n vectors are used.

Note that each coset contains 2^k vectors; if we show that no vector can appear in more than one coset, we will have demonstrated that there are 2^r = 2^{n−k} cosets.
Property 1 of cosets. Every binary vector of length n appears in one and only one coset.

Proof. Every vector appears in at least one coset, as the partitioning stops only when all vectors are used. Suppose that a vector appeared twice in one coset; then for some value of the index i we have c_{j_1} ⊕ η_i = c_{j_2} ⊕ η_i, or c_{j_1} = c_{j_2}, which is a contradiction, as all code words are unique. Suppose that a vector appears in two cosets; then c_{j_1} ⊕ η_{i_1} = c_{j_2} ⊕ η_{i_2}, where we assume i_2 > i_1. Then η_{i_2} = c_{j_1} ⊕ c_{j_2} ⊕ η_{i_1} = c_{j_3} ⊕ η_{i_1}, which is a contradiction, as η_{i_2} would have appeared in a previous coset, against the hypothesis.
Example 11.2.3
Consider partitioning the 2^4 binary vectors of length 4 into cosets using the group code with code words 0000, 0011, 1100, 1111, as follows:

        0000  0011  1100  1111
η_2 =   0001  0010  1101  1110
η_3 =   0111  0100  1011  1000    (11.38)
η_4 =   1010  1001  0110  0101

The vectors η_1 = 0, η_2, η_3, …, η_{2^r} are called coset leaders; the partitioning (11.38) is called coset table or decoding table.
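The construction of the coset table can be sketched as follows, choosing at each step a minimum-weight leader among the vectors not yet used (the tie-breaking among leaders of equal weight is arbitrary, in agreement with Property 2 of cosets):

```python
from itertools import product

# Coset partitioning of {0,1}^4 by the group code of Example 11.2.3.
code = {(0, 0, 0, 0), (0, 0, 1, 1), (1, 1, 0, 0), (1, 1, 1, 1)}

remaining = set(product([0, 1], repeat=4))
cosets = []
while remaining:
    leader = min(remaining, key=sum)   # a minimum-weight vector not yet used
    coset = {tuple(l ^ b for l, b in zip(leader, cw)) for cw in code}
    cosets.append((leader, coset))
    remaining -= coset
```

This yields 2^r = 4 disjoint cosets of 2^k = 4 vectors each, the first being the code itself with leader η_1 = 0000.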
Property 2 of cosets. Suppose that instead of choosing η_i as the coset leader of the i-th coset, we choose another element of that coset; the new coset formed by using this new coset leader contains exactly the same vectors as the old coset.

Proof. Assume that the new coset leader is η_i ⊕ c_{j_1}, and that z is an element of the new coset; then z = η_i ⊕ c_{j_1} ⊕ c_{j_2} = η_i ⊕ c_{j_3}, so z is an element of the old coset. As the new and the old cosets both contain 2^k vectors and all vectors in a coset are unique, every element of the new coset belongs to the old coset and vice versa.

Example 11.2.4
Suppose that in the previous example we had chosen the third coset leader as 0100; then the table (11.38) would be

        0000  0011  1100  1111
η_2 =   0001  0010  1101  1110
η_3 =   0100  0111  1000  1011    (11.39)
η_4 =   1010  1001  0110  0101
Proposition 11.1
Decoding using the decoding table decodes to the code word closest to the received word; in case several code words are at the same smallest distance from the received word, it decodes to one of these closest words.

Proof. Assume that the received word is the j-th vector in the i-th coset. The received word, given by z = c_j ⊕ η_i, is corrected to the code word c_j, and the distance between the received word and the j-th code word is w(η_i). Suppose that another code word, say c_k, is closer to the received vector: then

w(c_k ⊕ c_j ⊕ η_i) < w(η_i)    (11.40)

or, letting c_ℓ = c_k ⊕ c_j,

w(c_ℓ ⊕ η_i) < w(η_i)    (11.41)

but this cannot be, as η_i is assumed to be the minimum weight vector in its coset and c_ℓ ⊕ η_i is in that coset.

We note that the coset leaders determine the only error patterns that can be corrected by the code. Coset leaders, moreover, have many other interesting properties: for example, if a code has minimum distance d_min^H, all binary n-tuples of weight less than or equal to ⌊(d_min^H − 1)/2⌋ are coset leaders.
Definition 11.5
A code for which the coset leaders are all vectors of weight t or less, and no others, is called a perfect t-error correcting code. A code for which the coset leaders are all vectors of weight t or less, plus some but not all vectors of weight t + 1, and no others, is called a quasi-perfect t-error correcting code.

Examples of perfect codes include the Hamming codes, with t = 1 (d_min^H = 3), n = 2^r − 1, k = n − r, r > 1, for which the columns of the matrix H are given by all non-zero vectors of length r, and the Golay code, with t = 3 (d_min^H = 7), n = 23, k = 12, r = 11.
The following modification of the decoding method dealt with in this section will be useful later on:

Step 1′: locate the received vector in the coset table and identify the coset leader of the coset containing that vector.
Step 2′: add the coset leader to the received vector to find the decoded code word.
Syndrome decoding
A third method of decoding is based on the concept of syndrome. Among the methods
described in this section, syndrome decoding is the only method of practical value for a
code with a large number of code words.
Definition 11.6
For any parity check matrix H, we define the syndrome s(z) of a binary vector z of length n as

s(z) = Hz    (11.42)

Property 3 of cosets. All vectors in the same coset have the same syndrome; vectors in different cosets have distinct syndromes.

Proof. Assume that z_1 and z_2 are in the same coset, say the i-th: then z_1 = η_i ⊕ c_{j_1} and z_2 = η_i ⊕ c_{j_2}. Moreover, s(z_1) = Hz_1 = H(η_i ⊕ c_{j_1}) = Hη_i ⊕ Hc_{j_1} = Hη_i ⊕ 0 = s(η_i). Similarly s(z_2) = s(η_i), so s(z_1) = s(z_2) = s(η_i): this proves the first part of the property. Now assume that z_1 and z_2 are in different cosets, say the i_1-th and i_2-th: then z_1 = η_{i_1} ⊕ c_{j_1} and z_2 = η_{i_2} ⊕ c_{j_2}, so s(z_1) = s(η_{i_1}) and s(z_2) = s(η_{i_2}). If s(z_1) = s(z_2), then s(η_{i_1}) = s(η_{i_2}), which implies Hη_{i_1} = Hη_{i_2}. Consequently H(η_{i_1} ⊕ η_{i_2}) = 0, i.e. η_{i_1} ⊕ η_{i_2} is a code word, say c_{j_3}. Then η_{i_2} = η_{i_1} ⊕ c_{j_3}, which implies that η_{i_1} and η_{i_2} are in the same coset, a contradiction. Thus the assumption that s(z_1) = s(z_2) is incorrect.

From Property 3 we see that there is a one-to-one relation between cosets and syndromes; this leads to the third method of decoding, which proceeds as follows:

Step 1″: compute the syndrome of the received vector; this syndrome identifies the coset in which the received vector lies. Identify then the leader of that coset.
Step 2″: add the coset leader to the received vector to find the decoded code word.
Example 11.2.5
Consider the parity check matrix

H = [ 1 1 0 1 0 0
      1 0 1 0 1 0
      1 1 1 0 0 1 ]    (11.43)

The coset leaders and their respective syndromes obtained using (11.42) are reported in Table 11.1. Suppose that the vector z = 000111 is received. To decode, we first compute the syndrome Hz = [1, 1, 1]^T; then by Table 11.1 we identify the coset leader as [1, 0, 0, 0, 0, 0]^T, and obtain the decoded code word (1 0 0 1 1 1).
Table 11.1 Coset leaders and corresponding syndromes.

Coset leader   Syndrome
000000         000
000001         001
000010         010
000100         100
001000         011
010000         101
100000         111
100001         110
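Syndrome decoding as in Example 11.2.5 amounts to a table lookup; a sketch with the coset-leader table of Table 11.1 hard-coded is:

```python
import numpy as np

# Syndrome decoder for the parity check matrix (11.43).
H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

# Coset leaders indexed by syndrome, as in Table 11.1
leaders = {(0, 0, 0): (0, 0, 0, 0, 0, 0), (0, 0, 1): (0, 0, 0, 0, 0, 1),
           (0, 1, 0): (0, 0, 0, 0, 1, 0), (1, 0, 0): (0, 0, 0, 1, 0, 0),
           (0, 1, 1): (0, 0, 1, 0, 0, 0), (1, 0, 1): (0, 1, 0, 0, 0, 0),
           (1, 1, 1): (1, 0, 0, 0, 0, 0), (1, 1, 0): (1, 0, 0, 0, 0, 1)}

z = np.array([0, 0, 0, 1, 1, 1])             # received vector
s = tuple(int(x) for x in H @ z % 2)         # syndrome (11.42)
decoded = z ^ np.array(leaders[s])           # add the coset leader
print(s, decoded)                            # (1, 1, 1) [1 0 0 1 1 1]
```

The dictionary plays the role of the RAM discussed below: 2^r = 8 locations, addressed by the r = 3 syndrome bits, each holding an n = 6 bit coset leader.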
The advantage of syndrome decoding over the other decoding methods previously described is that there is no need to store the entire decoding table at the receiver. The first part of Step 1″, namely computing the syndrome, is trivial. The second part of Step 1″, namely identifying the coset leader corresponding to that syndrome, is the difficult part of the procedure; in general it requires a RAM with 2^r memory locations, addressed by the syndrome of r bits and containing the coset leaders of n bits, for a total of n 2^r memory bits.
There is also an algebraic method to identify the coset leader. In fact, this problem is equivalent to finding the minimum set of columns of the parity check matrix that sum to the syndrome; in other words, we must find the vector z of minimum weight such that Hz = s. For a single error correcting Hamming code, all coset leaders are of weight 1 or 0, so a non-zero syndrome corresponds to a single column of H and the correspondence between syndrome and coset leader is simple. For a code with coset leaders of weight 0, 1, or 2, the syndrome is either 0, a single column of H, or the sum of two columns, and so on.
For a particular class of codes that will be considered later, the structure of the con-
struction of H will allow identification of the coset leader starting from the syndrome by
using algebraic procedures. In general, each class of codes leads to a different technique to
perform this task.
Property 7 of parity check codes. There are exactly 2^r correctable error vectors for a parity check code with r parity check bits.

Proof. The correctable error vectors are given by the coset leaders, and there are 2^{n−k} = 2^r of them, all distinct. On the other hand, there are 2^r distinct syndromes and each corresponds to a correctable error vector.
For a binary symmetric channel (see Definition 6.1) we should correct all error vectors of weight i, i = 0, 1, 2, …, until we exhaust the capability of the code. Specifically, we should try to use a perfect code or a quasi-perfect code. For a quasi-perfect t-error correcting code, the coset leaders consist of all error vectors of weight i = 0, 1, 2, …, t, and some vectors of weight t + 1.
Nonbinary parity check codes are discussed in Appendix 11.A.
842 Chapter 11. Channel codes
    ax = b                                          (11.44)

where a and b are known coefficients, and all values are from the finite alphabet {0, 1, 2,
..., q − 1}. First, we need to introduce the concept of multiplication, which is normally
given in the form of a multiplication table, as the one given in Table 11.2 for the three
elements {0, 1, 2}.
Table 11.2 allows us to solve (11.44) for any values of a and b, except a = 0. For
example, the solution to the equation 2x = 1 is x = 2, as from the multiplication table we find
2 · 2 = 1.
Let us now consider the case of an alphabet with four elements. A multiplication table for
the four elements {0, 1, 2, 3}, resulting from the modulo 4 arithmetic, is given in Table 11.3.
Note that the equation 2x = 2 has two solutions, x = 1 and x = 3, and the equation 2x = 1
has no solution. It is possible to construct a multiplication table that allows equation
(11.44) to be solved uniquely for x, provided that a ≠ 0, as shown in Table 11.4.
Table 11.2  Multiplication table for the three elements {0, 1, 2} (modulo 3).

    ·  | 0  1  2
    ---+---------
    0  | 0  0  0
    1  | 0  1  2
    2  | 0  2  1

Table 11.3  Multiplication table for the four elements {0, 1, 2, 3} (modulo 4).

    ·  | 0  1  2  3
    ---+------------
    0  | 0  0  0  0
    1  | 0  1  2  3
    2  | 0  2  0  2
    3  | 0  3  2  1
11.2. Block codes 843
Note that Table 11.4 is not obtained using modulo 4 arithmetic. For example, 2x = 3
has the solution x = 2, and 2x = 1 has the solution x = 3.
Modulo q arithmetic
Consider the elements {0, 1, 2, ..., q − 1}, where q is a positive integer larger than or equal
to 2. We define two operations for combining pairs of elements from this set. The first,
denoted by ⊕, is called modulo q addition and is defined as

    c = a ⊕ b = { a + b        if 0 ≤ a + b < q
                { a + b − q    if a + b ≥ q          (11.45)

Here a + b is the ordinary addition operation for integers, which may produce an integer not in
the set. In this case q is subtracted from a + b, and a + b − q is always an element of the set
{0, 1, 2, ..., q − 1}. The second operation, denoted by ⊗, is called modulo q multiplication
and is defined as

    d = a ⊗ b = { ab                   if 0 ≤ ab < q
                { ab − ⌊ab/q⌋ q        if ab ≥ q     (11.46)

Note that ab − ⌊ab/q⌋ q is the remainder, or residue, of the division of ab by q, and is always
an integer in the set {0, 1, 2, ..., q − 1}. Often we will omit the notation ⊗ and write a ⊗ b
simply as ab.
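Definitions (11.45) and (11.46) can be checked with a short sketch; the function names below are our own, and the brute-force search for x in ax = b is purely illustrative:

```python
# Modulo-q addition (11.45) and multiplication (11.46), plus a brute-force
# solver for a*x = b over {0, ..., q-1}: the solution is unique for every
# a != 0 exactly when q is prime.
def add_mod(a, b, q):
    return (a + b) % q          # equivalent to subtracting q when a+b >= q

def mul_mod(a, b, q):
    return (a * b) % q          # equivalent to ab - floor(ab/q)*q

def solve(a, b, q):
    # all x with a (x) x = b (mod q)
    return [x for x in range(q) if mul_mod(a, x, q) == b]
```

For q = 3 (a field) the equation 2x = 1 has the unique solution x = 2; for q = 4, 2x = 2 has two solutions and 2x = 1 has none, matching the discussion of Tables 11.2 and 11.3.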
We recall that special names are given to sets which possess certain properties with
respect to operations. Consider the general set G that contains the elements {α, β, γ, δ, ...},
and two operations for combining elements from the set. We denote the first operation by △
(addition), and the second operation by ◇ (multiplication). Often we will omit the notation ◇
and write a ◇ b simply as ab. The properties we are interested in are:
11. Existence of multiplicative inverse. For every α ∈ G, except the additive identity element, there
exists an element δ ∈ G, called the multiplicative inverse of α, and indicated with α^{−1},
such that α ◇ δ = δ ◇ α = I.
Any set G for which Properties 1–4 hold is called a group with respect to △. If G has a
finite number of elements, then G is called a finite group, and the number of elements of G
is called the order of G.
Any set G for which Properties 1–5 hold is called an Abelian group with respect to △.
Any set G for which Properties 1–8 hold is called a ring with respect to the operations
△ and ◇.
Any set G for which Properties 1–9 hold is called a commutative ring with respect to
the operations △ and ◇.
Any set G for which Properties 1–10 hold is called a commutative ring with identity.
Any set G for which Properties 1–11 hold is called a field.
It can be seen that the set {0, 1, 2, ..., q − 1} is a commutative ring with identity with
respect to the operations of addition ⊕ defined in (11.45) and multiplication ⊗ defined in
(11.46). We will show by the next three properties that this set also satisfies Property 11 if
and only if q is a prime: in other words, we will show that the set {0, 1, 2, ..., q − 1} is a
field with respect to the modulo q addition and modulo q multiplication if and only if q is
a prime.
Finite fields are called Galois fields; a field of q elements is usually denoted as GF(q).
Property 11a of modulo q arithmetic. If q is not a prime, each factor of q (less than q
and greater than 1) does not have a multiplicative inverse.
Proof. Let q = ab, where 1 < a, b < q; then, observing (11.46), a ⊗ b = 0. Assume
that a has a multiplicative inverse a^{−1}; then a^{−1} ⊗ (a ⊗ b) = a^{−1} ⊗ 0 = 0. Now, from
a^{−1} ⊗ (a ⊗ b) = 0 it follows that 1 ⊗ b = 0; this implies b = 0, which is a contradiction as b > 1.
Similarly we show that b does not have a multiplicative inverse.
Property 11c of modulo q arithmetic. If q is a prime, all non-zero elements of the set
{0, 1, 2, ..., q − 1} have a multiplicative inverse.
Proof. Assume the converse, that is, the element j, with 1 ≤ j ≤ q − 1, does not have a
multiplicative inverse; then there must be two distinct elements a, b ∈ {0, 1, 2, ..., q − 1}
such that a ⊗ j = b ⊗ j. This is a consequence of the fact that the product i ⊗ j can only
assume values in the set {0, 2, 3, ..., q − 1}, as by assumption i ⊗ j ≠ 1; then

    (a ⊗ j) ⊕ (q − (b ⊗ j)) = 0                     (11.47)

On the other hand, q − (b ⊗ j) = (q − b) ⊗ j, and

    (a ⊕ (q − b)) ⊗ j = 0                           (11.48)

But j ≠ 0 and consequently, by Property 11b, we have a ⊕ (q − b) = 0. This implies
a = b, which is a contradiction.
Definition 11.7
An ideal I is a subset of elements of a ring R such that:
1. I is a subgroup of the additive group of R, that is, the elements of I form a group with
respect to the addition defined in R;
2. for any element a of I and any element r of R, ar and ra are in I.
    f(y) + g(y) = (f_0 ⊕ g_0) + (f_1 ⊕ g_1) y + (f_2 ⊕ g_2) y^2 + ··· + (f_m ⊕ g_m) y^m + ··· + f_n y^n
                                                    (11.50)

Example 11.2.6
Let p = 5, f(y) = 1 + 3y + 2y^4, and g(y) = 4 + 3y + 3y^2; then f(y) + g(y) = y + 3y^2 + 2y^4.
Note that

    deg(f(y) + g(y)) ≤ max(deg(f(y)), deg(g(y)))
Multiplication among polynomials is defined as usual, where the arithmetic to perform
operations with the various coefficients is modulo p:

    d_i = (f_0 ⊗ g_i) ⊕ (f_1 ⊗ g_{i−1}) ⊕ ··· ⊕ (f_{i−1} ⊗ g_1) ⊕ (f_i ⊗ g_0)    (11.52)

Example 11.2.7
Let p = 2, f(y) = 1 + y + y^3, and g(y) = 1 + y^2 + y^3; then f(y) g(y) = 1 + y + y^2 +
y^3 + y^4 + y^5 + y^6.
Note that

    deg(f(y) g(y)) = deg(f(y)) + deg(g(y))
Definition 11.8
If f(y) g(y) = d(y), we say that f(y) divides d(y), and g(y) divides d(y). We say that
p(y) is an irreducible polynomial if and only if, whenever a polynomial a(y) divides
p(y), then either a(y) = a, with a a non-zero constant in {1, 2, ..., p − 1}, or a(y) = k p(y),
with k a non-zero constant in {1, 2, ..., p − 1}.
The concept of an irreducible polynomial plays the same role in the theory of polynomials
as the concept of a prime number does in number theory.
Example 11.2.8
Let p = 2 and q(y) = 1 + y + y^3; then the set P consists of 2^3 polynomials, {0, 1, y, y +
1, y^2, y^2 + 1, y^2 + y, y^2 + y + 1}.
Example 11.2.9
Let p = 3 and q(y) = 2y^2; then the set P consists of 3^2 polynomials, {0, 1, 2, y, y +
1, y + 2, 2y, 2y + 1, 2y + 2}.
We now define two operations among polynomials of the set P, namely modulo q(y)
addition, denoted by △, and modulo q(y) multiplication, denoted by ◇. Modulo q(y)
addition is defined for every pair of polynomials a(y) and b(y) from the set P as

    a(y) △ b(y) = a(y) + b(y)                       (11.53)

where a(y) + b(y) is defined in (11.50).
The definition of modulo q(y) multiplication requires the knowledge of the Euclidean
division algorithm.
Euclidean division algorithm. For every pair of polynomials α(y) and β(y) with coef-
ficients from some field, and deg(β(y)) ≥ deg(α(y)) > 0, there exists a unique pair of
polynomials q(y) and r(y) such that

    β(y) = q(y) α(y) + r(y)                         (11.54)

where 0 ≤ deg(r(y)) < deg(α(y)); the polynomials q(y) and r(y) are called, respectively,
the quotient polynomial and the remainder or residue polynomial. In a notation analogous to that
used for integers we can write

    q(y) = ⌊β(y) / α(y)⌋                            (11.55)

and

    r(y) = β(y) − ⌊β(y) / α(y)⌋ α(y)                (11.56)
Example 11.2.10
Let p = 2, β(y) = y^4 + 1, and α(y) = y^3 + y + 1; then y^4 + 1 = y(y^3 + y + 1) + y^2 + y + 1,
so q(y) = y and r(y) = y^2 + y + 1.
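The Euclidean division algorithm (11.54) can be sketched directly for polynomials with coefficients from GF(p); this is our own minimal illustration, with polynomials stored as coefficient lists, lowest degree first:

```python
# Polynomial division over GF(p), p prime: returns (quotient, remainder)
# with deg(remainder) < deg(divisor). Coefficient lists are ordered from
# the constant term up, e.g. y^4 + 1 -> [1, 0, 0, 0, 1].
def poly_divmod(beta, alpha, p):
    beta = beta[:]                          # working copy of the dividend
    inv = pow(alpha[-1], p - 2, p)          # inverse of the leading coefficient
    quo = [0] * (len(beta) - len(alpha) + 1)
    for shift in range(len(beta) - len(alpha), -1, -1):
        coef = (beta[shift + len(alpha) - 1] * inv) % p
        quo[shift] = coef
        for i, a in enumerate(alpha):       # cancel the current leading term
            beta[shift + i] = (beta[shift + i] - coef * a) % p
    return quo, beta[:len(alpha) - 1]       # quotient, remainder
```

Applied to Example 11.2.10 (p = 2, β(y) = y^4 + 1, α(y) = y^3 + y + 1) this yields quotient y and remainder y^2 + y + 1.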
We define modulo q(y) multiplication, denoted by ◇, for polynomials a(y) and b(y) in
the set P as

    a(y) ◇ b(y) = { a(y) b(y)                              if deg(a(y) b(y)) < deg(q(y))
                  { a(y) b(y) − ⌊a(y) b(y) / q(y)⌋ q(y)    otherwise
                                                           (11.57)
Example 11.2.11
Let p = 2 and q(y) = 1 + y + y^3; then (y^2 + 1) ◇ (y + 1) = y^3 + y^2 + y + 1 =
(−1 − y) + y^2 + y + 1 = (1 + y) + y^2 + y + 1 = y^2, as y^3 ≡ −(1 + y) ≡ 1 + y (mod q(y)).
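Definition (11.57) amounts to an ordinary polynomial product followed by reduction modulo q(y). A minimal sketch (our own helper names; q(y) is assumed monic, which holds for every q(y) used in this section):

```python
# Modulo q(y) multiplication (11.57) over GF(p): multiply, then reduce.
def poly_mul(a, b, p):
    d = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            d[i + j] = (d[i + j] + ai * bj) % p
    return d

def poly_mod(f, q, p):
    # reduce f modulo the monic polynomial q, coefficient lists lowest-first
    f = f[:]
    while len(f) >= len(q):
        coef = f[-1]                        # leading coefficient of f
        for i in range(len(q)):             # subtract coef * q aligned at the top
            f[len(f) - len(q) + i] = (f[len(f) - len(q) + i] - coef * q[i]) % p
        f.pop()                             # leading term is now zero
    return f
```

With p = 2 and q(y) = 1 + y + y^3 this reproduces Example 11.2.11: (y^2 + 1) ◇ (y + 1) = y^2.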
It can be shown that the set of polynomials with coefficients from some field and degree less
than deg(q(y)) is a commutative ring with identity with respect to the operations modulo
q(y) addition and modulo q(y) multiplication. We now find under what conditions this set
of polynomials and operations forms a field.
Property 11a of modular polynomial arithmetic. If q(y) is not irreducible, then the factors
of q(y), of degree greater than zero and less than deg(q(y)), do not have multiplicative
inverses.
Proof. Let q(y) = a(y) b(y), where 0 < deg(a(y)), deg(b(y)) < deg(q(y)); then
a(y) ◇ b(y) = 0. Assume a(y) has a multiplicative inverse a^{−1}(y); then, from a^{−1}(y) ◇
(a(y) ◇ b(y)) = a^{−1}(y) ◇ 0 = 0 it follows that (a^{−1}(y) ◇ a(y)) ◇ b(y) = 0, hence 1 ◇ b(y) = 0, or
b(y) = 0. The last equation is a contradiction, as by assumption deg(b(y)) > 0. Similarly,
we show that b(y) does not have a multiplicative inverse.
We give without proof the following properties.
Property 11c of modular polynomial arithmetic. If q(y) is irreducible, all non-zero el-
ements of the set of polynomials P of degree less than deg(q(y)) have multiplicative
inverses.
We now have that the set of polynomials with coefficients from some field and degree
less than deg(q(y)) forms a field, with respect to the operations of modulo q(y) addition
and modulo q(y) multiplication, if and only if q(y) is irreducible.
Furthermore, it can be shown that there exists at least one irreducible polynomial of
degree m, for every m ≥ 1, with coefficients from a generic field {0, 1, 2, ..., p − 1}. We
now have a method of generating a field with p^m elements.
Example 11.2.12
Let p = 2 and q(y) = y^2 + y + 1; we have that q(y) is irreducible. Consider the set P with
elements {0, 1, y, y + 1}. The addition and multiplication tables for these elements modulo
y^2 + y + 1 are given in Table 11.5 and Table 11.6, respectively.
Table 11.5  Addition modulo y^2 + y + 1.

    △    | 0     1     y     y+1
    -----+------------------------
    0    | 0     1     y     y+1
    1    | 1     0     y+1   y
    y    | y     y+1   0     1
    y+1  | y+1   y     1     0

Table 11.6  Multiplication modulo y^2 + y + 1.

    ◇    | 0     1     y     y+1
    -----+------------------------
    0    | 0     0     0     0
    1    | 0     1     y     y+1
    y    | 0     y     y+1   1
    y+1  | 0     y+1   1     y
and

    b(y) = Σ_{i=0}^{m−1} b_i y^i        b_i ∈ GF(p)    (11.60)
Figure 11.1. Device for the sum of two elements (a_0, ..., a_{m−1}) and (b_0, ..., b_{m−1})
of GF(p^m): each pair of components (a_i, b_i) is added modulo p, yielding the components
s_i of the sum.
Figure 11.2. Device for the multiplication of two elements (a_0, ..., a_{m−1}) and (b_0, ..., b_{m−1})
of GF(p^m). Tc is the clock period, and ACC denotes an accumulator. All additions and
multiplications are modulo p.
    a(y) ◇ b(y) = a_0 b(y)
                + a_1 (y b(y)) mod q(y)
                  ⋮                                  (11.63)
                + a_{m−1} (y^{m−1} b(y)) mod q(y)

where additions and multiplications are modulo p. Now, using the identity Σ_{i=0}^{m} q_i y^i =
0 mod q(y), note that the following relation holds:

    y b(y) mod q(y) = (−b_{m−1} q_m^{−1} q_0) + (b_0 − b_{m−1} q_m^{−1} q_1) y + ··· + (b_{m−2} − b_{m−1} q_m^{−1} q_{m−1}) y^{m−1}
                                                     (11.64)

The term (y^i b(y)) mod q(y) is thus obtained by initializing the SR of Figure 11.2 to the
sequence (b_0, ..., b_{m−1}), and by applying i clock pulses; the desired result is then contained
in the shift register.
Observing (11.63), we find that it is necessary to multiply each element of the SR by a_i
and accumulate the result; after multiplications by all coefficients {a_i} have been performed,
the final result is given by the content of the accumulators. Note that in the binary case,
for p = 2, the operations of addition and multiplication are carried out by XOR and AND
functions, respectively.
    α ◇ α ◇ ··· ◇ α (j times) = α ◇ α ◇ ··· ◇ α (i times) = β    (11.65)

We observe that

    α ◇ α ◇ ··· ◇ α (j times) = [α ◇ α ◇ ··· ◇ α ((j − i) times)] ◇ [α ◇ α ◇ ··· ◇ α (i times)]    (11.66)

Substituting (11.65) in (11.66), and observing that β has a multiplicative inverse, we can
multiply from the right by this inverse to obtain

    α^{j−i} = α ◇ α ◇ ··· ◇ α ((j − i) times) = 1    (11.67)
Definition 11.9
For every non-zero field element α, the order of α is the smallest integer ℓ such that
α^ℓ = 1.
Example 11.2.13
Consider the field with elements {0, 1, 2, 3, 4}, and modulo 5 arithmetic. Then

    element   order
       1        1
       2        4                                   (11.68)
       3        4
       4        2
Example 11.2.14
Consider the field GF(2^2) with 4 elements, {0, 1, y, y + 1}, and addition and multiplication
modulo y^2 + y + 1. Then

    element   order
       1        1                                   (11.69)
       y        3
      y+1       3
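The orders listed in (11.68) can be verified with a few lines; this sketch (our own function name) simply multiplies an element by itself until the product reaches 1:

```python
# Order of a non-zero element of GF(q) with modulo-q arithmetic, q prime:
# the smallest l such that a^l = 1 (Definition 11.9).
def order(a, q):
    power, ell = a % q, 1
    while power != 1:
        power = (power * a) % q     # next power of a, modulo q
        ell += 1
    return ell
```

For GF(5) this reproduces the table in (11.68): elements 2 and 3 have order 4 (they are primitive), element 4 has order 2.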
6. An element from the field GF(q) is said to be primitive if it has order q − 1. For fields
generated by arithmetic modulo a polynomial q(y), if the field element y is primitive we
say that q(y) is a primitive irreducible polynomial.
A property of finite fields that we give without proof is that every finite field has at least
one primitive element; we note that once a primitive element has been identified, every
other non-zero field element can be obtained by multiplying the primitive element by itself
an appropriate number of times. A list of primitive polynomials for the ground field GF(2)
is given in Table 11.7.
Example 11.2.15
For the field GF(4) generated by the polynomial arithmetic modulo q(y) = y^2 + y + 1,
with ground field GF(2), y is a primitive element (see (11.69)); thus y^2 + y + 1 is a
primitive polynomial.
Table 11.7  Primitive polynomials for the ground field GF(2).

    m                               m
    2   1 + y + y^2                14   1 + y + y^6 + y^10 + y^14
    3   1 + y + y^3                15   1 + y + y^15
    4   1 + y + y^4                16   1 + y + y^3 + y^12 + y^16
    5   1 + y^2 + y^5              17   1 + y^3 + y^17
    6   1 + y + y^6                18   1 + y^7 + y^18
    7   1 + y^3 + y^7              19   1 + y + y^2 + y^5 + y^19
    8   1 + y^2 + y^3 + y^4 + y^8  20   1 + y^3 + y^20
    9   1 + y^4 + y^9              21   1 + y^2 + y^21
   10   1 + y^3 + y^10             22   1 + y + y^22
   11   1 + y^2 + y^11             23   1 + y^5 + y^23
   12   1 + y + y^4 + y^6 + y^12   24   1 + y + y^2 + y^7 + y^24
   13   1 + y + y^3 + y^4 + y^13
Proof. Every non-zero element β can be written as a power of a primitive element α_p;
this implies that there is some i < (q − 1) such that

    β = α_p ◇ α_p ◇ ··· ◇ α_p (i times) = α_p^i     (11.70)

Note that from the definition of a primitive element we get α_p^{q−1} = 1, but α_p^j ≠ 1 for
j < (q − 1); furthermore there exists an integer ℓ such that β^ℓ = α_p^{iℓ} = 1. Consequently
(ℓ)(i) is a multiple of (q − 1), and it is exactly the smallest multiple of i that is a multiple
of (q − 1); thus (i)(ℓ) = l.c.m.(i, q − 1), i.e. the least common multiple of i and (q − 1).
We recall that

    l.c.m.(a, b) = ab / g.c.d.(a, b)                (11.71)

so that

    (i)(ℓ) = (i)(q − 1) / g.c.d.(i, q − 1)          (11.72)

and

    ℓ = (q − 1) / g.c.d.(i, q − 1)                  (11.73)
Example 11.2.16
Let α_p be a primitive element of GF(16); from (11.73) the orders of the non-zero field
elements are:
8. A ground field can itself be generated as an extension field. For example, GF(16) can be
generated by taking an irreducible polynomial of degree 4 with coefficients from GF(2),
which we would call GF(2^4), or by taking an irreducible polynomial of degree 2 with
coefficients from GF(4), which we would call GF(4^2). In either case we would have the
same field, except for the names of the elements.
Example 11.2.17
Consider the field GF(2^3) generated by the primitive polynomial q(y) = 1 + y + y^3, with
ground field GF(2). As q(y) is a primitive polynomial, each element of GF(2^3), except the
zero element, can be expressed as a power of y. Recalling the polynomial representation P,
we may attach to each polynomial a vector representation, with m components on GF(p)
given by the coefficients of the powers of the variable y. The three representations are
reported in Table 11.8.
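The table of representations can be generated mechanically by multiplying repeatedly by y and reducing modulo q(y); a minimal sketch (our own function name) for q(y) = 1 + y + y^3:

```python
# Generate the powers y^0 ... y^6 of GF(2^3) as coefficient vectors
# (b0, b1, b2), i.e. b0 + b1*y + b2*y^2, using y^3 = 1 + y (from
# q(y) = 1 + y + y^3 over GF(2)).
def gf8_powers():
    q = (1, 1, 0)                 # coefficients of y^3 = 1 + y
    elem, table = [1, 0, 0], []
    for _ in range(7):            # the 7 non-zero elements
        table.append(tuple(elem))
        carry = elem[2]           # multiply by y: shift up, reduce y^3
        elem = [carry * q[0] % 2,
                (elem[0] + carry * q[1]) % 2,
                (elem[1] + carry * q[2]) % 2]
    return table
```

The loop visits each non-zero element exactly once, confirming that y is primitive: y^3 = 1 + y gives (1, 1, 0), and y^6 = 1 + y^2 gives (1, 0, 1).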
Roots of a polynomial
Consider a polynomial of degree m with coefficients that are elements of some field. We
will use the variable x, as the polynomials are now considered for a purpose other than that
of generating a finite field. In fact, the field of the coefficients may itself have a polynomial
representation.
Table 11.8  Three representations of the elements of GF(2^3).

    power   polynomial      vector (b0 b1 b2)
    0       0               0 0 0
    1       1               1 0 0
    y       y               0 1 0
    y^2     y^2             0 0 1
    y^3     1 + y           1 1 0
    y^4     y + y^2         0 1 1
    y^5     1 + y + y^2     1 1 1
    y^6     1 + y^2         1 0 1
    (x + α)(x + β) = x^2 + (α + β) x + αβ = x^2 + x + 1    (11.75)

Thus if we use the operations defined in GF(4), (x + α) and (x + β) are factors of x^2 + x + 1;
it remains true that x^2 + x + 1 is irreducible over GF(2), as it has no factors with coefficients from GF(2).
But f(γ) = 0, so

    f(γ) = 0 = Q(γ)(γ − γ) + r_0 = r_0              (11.78)

therefore

    f(x) = Q(x)(x − γ)                              (11.79)

as the cross-terms contain a factor p, which is the same as 0 in GF(p). On the other hand,
for f_i ≠ 0, f_i^p = f_i, as from Property 7 on page 853 the order of any non-zero element
divides p − 1; the equation is true also if f_i is the zero element. Therefore

    (f(x))^p = f(x^p)                               (11.81)
Example 11.2.18
Consider the polynomial x^2 + x + 1 with coefficients from GF(2). We have already seen
that α, an element of GF(4), is a root of x^2 + x + 1 = 0. Therefore α^2 is also a root; but
α^2 = β, so β is a second root. The polynomial has degree two, thus it has two roots and
they are α and β, as previously seen. Note that β^2 is also a root, but β^2 = α.
Minimum function
Definition 11.10
Let β be an element of an extension field of GF(q); the minimum function of β, m_β(x), is the
monic polynomial of least degree with coefficients from GF(q) such that m_β(x)|_{x=β} = 0.
but as f(β) = 0 and m_β(β) = 0, then r(β) = 0. As deg(r(x)) < deg(m_β(x)), the only
possibility is r(x) = 0; thus f(x) = Q(x) m_β(x).
4. Let f(x) be any irreducible monic polynomial with coefficients from GF(q) for which
f(β) = 0, where β is an element of some extension field of GF(q); then f(x) = m_β(x).
Proof. From Property 3, f(x) must be divisible by m_β(x); but f(x) is irreducible, so it is
only trivially divisible by m_β(x), that is f(x) = K m_β(x): but f(x) and m_β(x) are both
monic polynomials, therefore K = 1.
We now introduce some interesting propositions.
1. Let β be an element of GF(q^m), with q prime; then the polynomial F(x), defined as

    F(x) = ∏_{i=0}^{m−1} (x − β^{q^i}) = (x − β)(x − β^q)(x − β^{q^2}) ··· (x − β^{q^{m−1}})    (11.84)

has all coefficients from GF(q).
Proof. Since β^{q^m} = β, we can also write

    F(x) = ∏_{i=1}^{m} (x − β^{q^i})                (11.85)
therefore

    F(x^q) = ∏_{i=1}^{m} (x^q − β^{q^i}) = ∏_{i=1}^{m} (x − β^{q^{i−1}})^q = ∏_{j=0}^{m−1} (x − β^{q^j})^q = (F(x))^q    (11.86)

Consider now the expression F(x) = Σ_{i=0}^{m} f_i x^i; then

    F(x^q) = Σ_{i=0}^{m} f_i x^{iq}                 (11.87)

and

    (F(x))^q = ( Σ_{i=0}^{m} f_i x^i )^q = Σ_{i=0}^{m} f_i^q x^{iq}    (11.88)
Equating like coefficients in (11.87) and (11.88) we get f_i^q = f_i; hence f_i is a root of the
equation x^q − x = 0. But on the basis of Property 7 on page 853 the q elements from
GF(q) all satisfy the equation x^q − x = 0, and this equation has only q roots; therefore
the coefficients f_i are elements from GF(q).
2. If g(x) is an irreducible polynomial of degree m with coefficients from GF(q), and
g(β) = 0, where β is an element of some extension field of GF(q), then β, β^q, β^{q^2}, ...,
β^{q^{m−1}} are all the roots of g(x).
Proof. At least one root of g(x) is in GF(q^m); this follows by observing that, if we form
GF(q^m) using the arithmetic modulo g(y), then y will be a root of g(x) = 0. From
Proposition 1, if β is an element from GF(q^m) then F(x) = ∏_{i=0}^{m−1} (x − β^{q^i}) has all
coefficients from GF(q); thus F(x) has degree m, and F(β) = 0. As g(x) is irreducible,
we know that g(x) = K m_β(x); but as F(β) = 0, and F(x) and g(x) have the same degree,
then F(x) = K_1 m_β(x), and therefore g(x) = K_2 F(x). As β, β^q, β^{q^2}, ..., β^{q^{m−1}} are all
roots of F(x), they must also be all the roots of g(x).
3. Let g(x) be a polynomial with coefficients from GF(q) which is also irreducible in this
field. Moreover, let g(β) = 0, where β is an element of some extension field of GF(q);
then the degree of g(x) equals the smallest integer k such that

    β^{q^k} = β                                     (11.89)

Proof. We have that deg(g(x)) ≥ k, as β, β^q, β^{q^2}, ..., β^{q^{k−1}} are all roots of g(x) and by
assumption are distinct. Assume that deg(g(x)) > k; from Proposition 2, we know that β
must be at least a double root of g(x) = 0, and therefore g′(x) = (d/dx) g(x) = 0 must
also have β as a root. As g(x) is irreducible we have that g(x) = K m_β(x), but m_β(x)
must divide g′(x); we get a contradiction because deg(g′(x)) < deg(g(x)).
Example 11.2.19
Consider the field GF(2^3) obtained by taking the polynomial arithmetic modulo the ir-
reducible polynomial y^3 + y + 1 with coefficients from GF(2); the field elements are
{0, 1, y, y + 1, y^2, y^2 + 1, y^2 + y, y^2 + y + 1}. Assume we want to find the minimum func-
tion of β = (y + 1). If (y + 1) is a root, so are (y + 1)^2 = y^2 + 1 and (y + 1)^4 = y^2 + y + 1.
Note that (y + 1)^8 = (y + 1) = β, thus the minimum function is

    m_{y+1}(x) = (x − β)(x − β^2)(x − β^4)
               = x^3 + x^2 + 1
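The conjugate-root construction of the minimum function can be sketched in code. This is our own illustration for GF(2^3) with q(y) = y^3 + y + 1; field elements are packed as 3-bit masks (bit i is the coefficient of y^i), and subtraction equals addition in characteristic 2:

```python
# Minimum function via conjugates: m_beta(x) = (x - beta)(x - beta^2)(x - beta^4)...
# GF(8) multiplication with modulus y^3 + y + 1, represented as mask 0b1011.
def gf_mul(a, b, q=0b1011, m=3):
    r = 0
    while b:
        if b & 1:
            r ^= a                  # add a shifted copy (XOR = GF(2) addition)
        b >>= 1
        a <<= 1
        if a >> m:                  # degree overflow: reduce modulo q
            a ^= q
    return r

def min_function(beta):
    # collect the conjugates beta, beta^2, beta^4, ... until they repeat
    conj, c = [], beta
    while c not in conj:
        conj.append(c)
        c = gf_mul(c, c)
    poly = [1]                      # coefficients in x, lowest degree first
    for c in conj:                  # multiply by (x + c); minus = plus here
        poly = [0] + poly
        for i in range(len(poly) - 1):
            poly[i] ^= gf_mul(poly[i + 1], c)
    return poly
```

For β = y + 1 (mask 0b011) this returns 1 + x^2 + x^3, i.e. x^3 + x^2 + 1 as in Example 11.2.19; for β = y it returns x^3 + x + 1, the defining polynomial itself.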
Example 11.2.20
Consider the field GF(2^3) of the previous example; as (y + 1), (y + 1)^2 = y^2 + 1,
(y + 1)^4 = y^2 + y + 1, (y + 1)^8 = y + 1, the minimum function has degree three; as the
minimum function is monic and irreducible, we have
As m_{y+1}(y + 1) = 0, then

    y^2 (1 + m_2) + y m_1 + (m_2 + m_1 + 1) = 0     (11.93)

As all coefficients of the powers of y must be zero, we get a system of equations in the
unknowns m_1 and m_2, whose solution is given by m_1 = 0 and m_2 = 1. Substitution of this
solution in (11.91) yields
Definition 11.11
The reciprocal polynomial of any polynomial m_α(x) = m_0 + m_1 x + m_2 x^2 + ··· + m_K x^K
is defined by m*_α(x) = m_0 x^K + m_1 x^{K−1} + ··· + m_{K−1} x + m_K.
Example 11.2.21
Consider the field GF(2^6) obtained by taking the polynomial arithmetic modulo the irre-
ducible polynomial y^6 + y + 1 with coefficients from GF(2); the polynomial y^6 + y + 1 is
primitive, thus from Property 7 on page 853 any non-zero field element can be written as a
power of the primitive element y. From Proposition 2, we have that the minimum function
of y is also the minimum function of y^2, y^4, y^8, y^16, y^32; the minimum function of y^3 is
also the minimum function of y^6, y^12, y^24, y^48, y^33; and so forth. We list in Table 11.11
the powers of y that have the same minimum function.
Given the minimum function of y^11, m_{y^11}(x) = x^6 + x^5 + x^3 + x^2 + 1, we want to find
the minimum function of y^13. From Table 11.11 we note that y^13 has the same mini-
mum function as y^52; furthermore we note that y^52 is the multiplicative inverse of y^11, as
(y^11)(y^52) = y^63 = 1. Therefore the minimum function of y^13 is the reciprocal polynomial
of m_{y^11}(x), given by m_{y^13}(x) = x^6 + x^4 + x^3 + x + 1.
Table 11.11  Powers of y with the same minimum function (GF(2^6)).

     1   2   4   8  16  32
     3   6  12  24  48  33
     5  10  20  40  17  34
     7  14  28  56  49  35
     9  18  36
    11  22  44  25  50  37
    13  26  52  41  19  38
    15  30  60  57  51  39
    21  42
    23  46  29  58  53  43
    27  54  45
    31  62  61  59  55  47
Proof.
a) Noting that g(x) must be divisible by each of the minimum functions, it must be the
smallest degree monic polynomial divisible by m_{β_1}(x), m_{β_2}(x), ..., m_{β_L}(x); but this
is just the definition of the least common multiple.
b) If all the minimum functions are distinct, as each is irreducible, the least common
multiple is given by the product of the polynomials.
Many such factorizations are possible for a given polynomial x^n − 1; we will consider
any one of them. We denote the degrees of g(x) and h(x) as r and k, respectively; thus
n = k + r. The choice of the symbols n, k and r is intentional, as they assume the same
meaning as in the previous sections.
The polynomial arithmetic modulo q(x) = x^n − 1 is particularly important in the dis-
cussion of cyclic codes.
Proposition 11.2
Consider the set of all polynomials of the form c(x) = a(x) g(x) modulo q(x), as a(x)
ranges over all polynomials of all degrees with coefficients from GF(q). This set must be
finite, as there are at most q^n remainder polynomials that can be obtained by dividing a
polynomial by x^n − 1. Now we show that there are exactly q^k distinct polynomials.
Proof. There are at least q^k distinct polynomials a(x) of degree less than or equal to k − 1,
and each such polynomial leads to a distinct polynomial a(x) g(x). In fact, as the degree
Example 11.2.22
Let g(x) = x + 1, GF(q) = GF(2), and n = 4; then all polynomials a(x) g(x) modulo
x^4 − 1 = x^4 + 1 are given by

    a(x)              a(x) g(x) mod (x^4 − 1)    code word
    0                 0                          0000
    1                 x + 1                      1100
    x                 x^2 + x                    0110
    x + 1             x^2 + 1                    1010        (11.99)
    x^2               x^3 + x^2                  0011
    x^2 + 1           x^3 + x^2 + x + 1          1111
    x^2 + x           x^3 + x                    0101
    x^2 + x + 1       x^3 + 1                    1001
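The enumeration in (11.99) can be reproduced by multiplying every a(x) of degree at most k − 1 by g(x) and wrapping exponents modulo n (since x^n ≡ 1). A minimal sketch for the binary case, with our own function name:

```python
# Enumerate the code words c(x) = a(x) g(x) mod (x^n - 1) over GF(2).
# Polynomials are bit lists, lowest degree first; g = [1, 1] means x + 1.
def cyclic_codewords(g, n):
    k = n - (len(g) - 1)            # k = n - deg(g)
    words = set()
    for a_bits in range(2 ** k):
        a = [(a_bits >> i) & 1 for i in range(k)]
        c = [0] * n
        for i, ai in enumerate(a):
            for j, gj in enumerate(g):
                c[(i + j) % n] ^= ai & gj   # wrap-around = reduction mod x^n - 1
        words.add(tuple(c))
    return sorted(words)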
We associate with any polynomial of degree less than n and coefficients from GF(q) a
vector of length n with components equal to the coefficients of the polynomial, that is

    f(x) = f_0 + f_1 x + f_2 x^2 + ··· + f_{n−1} x^{n−1}  →  f = (f_0, f_1, f_2, ..., f_{n−1})    (11.100)

Note that in this definition f_{n−1} need not be non-zero.
We can now define cyclic codes. The code words will be the vectors associated with a
set of polynomials; alternatively, we speak of the polynomials themselves as being code
words or code polynomials (see (11.99)).
Definition 11.12
Choose a field GF(q), a positive integer n, and a polynomial g(x) with coefficients from
GF(q) such that x^n − 1 = g(x) h(x); furthermore, let deg(g(x)) = r = n − k. The words
of a cyclic code are the vectors of length n that are associated with all multiples of g(x)
reduced modulo x^n − 1. In formulae: c(x) = a(x) g(x) mod (x^n − 1), for a(x) any polynomial
with coefficients from GF(q).
The polynomial g(x) is called the generator polynomial.
Proof. The all-zero word is a code word, as 0 · g(x) = 0; any multiple of a code word is a
code word, since if a_1(x) g(x) is a code word so is α a_1(x) g(x). Let a_1(x) g(x) and a_2(x) g(x)
be two code words; then

    α_1 a_1(x) g(x) + α_2 a_2(x) g(x) = (α_1 a_1(x) + α_2 a_2(x)) g(x) = a_3(x) g(x)    (11.101)

is a code word.
Proof. It is enough to show that if c(x) = c_0 + c_1 x + ··· + c_{n−2} x^{n−2} + c_{n−1} x^{n−1} corresponds
to a code word, then also c_{n−1} + c_0 x + ··· + c_{n−3} x^{n−2} + c_{n−2} x^{n−1} corresponds to a code
word. But if c(x) = a(x) g(x) = c_0 + c_1 x + ··· + c_{n−2} x^{n−2} + c_{n−1} x^{n−1} mod (x^n − 1), then
x c(x) = x a(x) g(x) = c_{n−1} + c_0 x + ··· + c_{n−3} x^{n−2} + c_{n−2} x^{n−1} mod (x^n − 1).
Example 11.2.23
Let GF(q) = GF(2), g(x) = x + 1, and n = 4. From the previous example we obtain the
code words, which can be grouped by the number of cyclic shifts.
Proof. If c(x) is a code polynomial, then c(x) = a(x) g(x) mod (x^n − 1), but h(x) c(x) =
h(x) a(x) g(x) = a(x) (g(x) h(x)) = a(x)(x^n − 1) = 0 mod (x^n − 1).
Assume now h(x) c(x) = 0 mod (x^n − 1); then h(x) c(x) = Q(x)(x^n − 1) = Q(x)
h(x) g(x), or c(x) = Q(x) g(x); therefore c(x) is a code polynomial.
generator matrix

        | g_0  g_1  g_2  ···  g_r      0    0  ···  0 |
    G = | 0    g_0  g_1  ···  g_{r−1}  g_r  0  ···  0 |    (11.103)
        | ⋮                                         ⋮ |
        | 0    0    0    ···  0        g_0  ···  g_r  |

Proof. We show that G is the generator matrix. The first row of G corresponds to the
polynomial g(x), the second to x g(x), and the last row to x^{k−1} g(x); but the code words
are all words of the form
    Σ_{j=0}^{i} c_j h_{i−j} + Σ_{j=i+1}^{n−1} c_j h_{n+i−j} = 0        i = 0, 1, 2, ..., n − 1    (11.110)

For i = n − 1, (11.110) becomes

    Σ_{j=0}^{n−1} c_j h_{n−1−j} = 0                 (11.111)

or [h_{n−1} h_{n−2} ... h_1 h_0] [c_0 c_1 ... c_{n−1}]^T = 0.
For i = n − 2, (11.110) becomes

    Σ_{j=0}^{n−2} c_j h_{n−2−j} + c_{n−1} h_{n−1} = 0    (11.112)

or [h_{n−2} h_{n−3} ... h_0 h_{n−1}] [c_0 c_1 ... c_{n−1}]^T = 0.
After r steps, for i = n − r, (11.110) becomes

    Σ_{j=0}^{n−r} c_j h_{n−r−j} + Σ_{j=n−r+1}^{n−1} c_j h_{2n−r−j} = 0    (11.113)

or [h_{n−r} h_{n−r−1} ... h_{n−r+2} h_{n−r+1}] [c_0 c_1 ... c_{n−1}]^T = 0. The r equations can be written
in matrix form as
    | h_{n−1}  h_{n−2}   ···  h_1        h_0       |   | c_0     |     | 0 |
    | h_{n−2}  h_{n−3}   ···  h_0        h_{n−1}   |   | c_1     |     | 0 |
    |   ⋮                                  ⋮       |   |  ⋮      |  =  | ⋮ |    (11.114)
    | h_{n−r}  h_{n−r−1} ···  h_{n−r+2}  h_{n−r+1} |   | c_{n−1} |     | 0 |

therefore all code words are solutions of the equation Hc = 0, where H is given by (11.104).
It still remains to be shown that all solutions of the equation Hc = 0 are code words. As
h_{n−1} = h_{n−2} = ··· = h_{n−r+1} = 0, and h_0 ≠ 0, from (11.104) H has rank r, and can be
written as H = [A B], where B is an r × r matrix with non-zero determinant; therefore

    | c_k     |               | c_0     |
    | c_{k+1} |  =  −B^{−1} A | c_1     |            (11.115)
    |  ⋮      |               |  ⋮      |
    | c_{n−1} |               | c_{k−1} |

so there are q^k = q^{n−r} solutions of the equation Hc = 0. As there are q^k code words, all
solutions of the equation Hc = 0 are the code words in the cyclic code.
Example 11.2.24
Let q = 2 and n = 7. As x^7 − 1 = x^7 + 1 = (x^3 + x + 1)(x^3 + x^2 + 1)(x + 1), we can
choose g(x) = x^3 + x + 1 and h(x) = (x^3 + x^2 + 1)(x + 1) = x^4 + x^2 + x + 1; thus the
Note that the columns of H are all possible non-zero vectors of length 3, so the code is a
Hamming single error correcting (7,4) code.
6. In a code word, any string of r consecutive symbols, even taken cyclically, can identify
the check positions.
Proof. From (11.115) it follows that the last r positions can be check positions. Now, if
we cyclically permute every code word by m positions, the resultant words are themselves
code words; thus the r check positions can be cyclically shifted anywhere in the code
words.
7. As the r check positions can be the first r positions, a simple encoding method in
canonical form is given by the following steps.
Step 1: represent the k information bits by the coefficients of the polynomial m(x) =
m_0 + m_1 x + ··· + m_{k−1} x^{k−1}.
Step 2: multiply m(x) by x^r to obtain x^r m(x).
Step 3: divide x^r m(x) by g(x) to obtain the remainder r(x) = r_0 + r_1 x + ··· + r_{r−1} x^{r−1}.
Step 4: form the code word c(x) = x^r m(x) − r(x); note that the coefficients of −r(x)
are the parity check bits.
Proof. To show that x^r m(x) − r(x) is a code word, we must prove that it is a multiple
of g(x): from Step 3 we obtain

    x^r m(x) = Q(x) g(x) + r(x)                     (11.118)

so that

    x^r m(x) − r(x) = Q(x) g(x)                     (11.119)
Example 11.2.25
Let g(x) = 1 + x + x^3, for q = 2 and n = 7. We report in Table 11.12 the message words
(m_0, ..., m_3) and the corresponding code words (c_0, ..., c_6) obtained by the generator
polynomial according to Definition 11.12 on page 863 for a(x) = m(x); the same code in
canonical form, obtained by (11.119), is reported in Table 11.13.
    g(x) = g_r x^r + g_{r−1} x^{r−1} + ··· + g_1 x + g_0    (11.120)

then

    x^r = −g_r^{−1} (g_{r−1} x^{r−1} + g_{r−2} x^{r−2} + ··· + g_1 x + g_0) mod g(x)    (11.121)

and

    b(x) = b_0 + b_1 x + ··· + b_{r−1} x^{r−1}      (11.123)
Figure 11.3. Scheme of an encoder for cyclic codes using a shift register with r elements;
the feedback taps are the coefficients g_0, g_1, ..., g_{r−1}, m_i is the input message
sequence, and Tc is the clock period.
Figure 11.4. Scheme of an encoder for cyclic codes using a shift register with k elements.
Proposition 11.3
All polynomials corresponding to vectors in the same coset have the same remainder when
they are divided by g(x); polynomials corresponding to vectors in different cosets have
different remainders when they are divided by g(x).
and
As upon dividing a_{j1}(x) g(x) and a_{j2}(x) g(x) by g(x) we get 0 as a remainder, the division of
z_1(x) and z_2(x) by g(x) gives the same remainder, namely the polynomial r_i(x), where
Now assume z_1(x) and z_2(x) are in different cosets, say the i_1-th and i_2-th cosets, but
have the same remainder, say r_0(x), when they are divided by g(x); then the coset leaders
η_{i_1}(x) and η_{i_2}(x) of these cosets must give the same remainder r_0(x) when they are divided
by g(x), i.e.
Figure 11.5. Device to compute the division of the polynomial z(x) = z_0 + z_1 x + ··· + z_{n−1} x^{n−1}
by g(x). After n clock pulses the r storage elements contain the remainder r_0, r_1, ..., r_{r−1}.
and

therefore we get

    η_{i_2}(x) = η_{i_1}(x) + (Q_2(x) − Q_1(x)) g(x) = η_{i_1}(x) + Q_3(x) g(x)    (11.133)

This implies that η_{i_1}(x) and η_{i_2}(x) are in the same coset, which is a contradiction.
This result leads to the following decoding method for cyclic codes.
Step 1: compute the remainder upon dividing the received polynomial z(x) of degree n − 1
by g(x), for example by the device of Figure 11.5 (see (11.124)), by presenting
at the input the sequence of received symbols and applying n clock pulses. The
remainder identifies the coset leader of the coset where the received polynomial is
located.
Step 2: subtract the coset leader from the received polynomial to obtain the decoded code
word.
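The two decoding steps can be sketched in software instead of with the shift-register device; this is our own illustration for the (7,4) Hamming code with g(x) = 1 + x + x^3, where the coset leaders of interest have weight at most 1:

```python
# Step 1: remainder of z(x) divided by g(x) (the syndrome).
# Step 2: subtract the coset leader identified by that remainder.
def poly_rem(z, g):
    rem, r = list(z), len(g) - 1
    for shift in range(len(z) - r - 1, -1, -1):
        if rem[shift + r]:
            for i, gi in enumerate(g):
                rem[shift + i] ^= gi
    return tuple(rem[:r])

def decode(z, g=(1, 1, 0, 1)):
    n = len(z)
    # remainder of x^i identifies the weight-1 coset leader e(x) = x^i
    leaders = {poly_rem([1 if j == i else 0 for j in range(n)], g): i
               for i in range(n)}
    s = poly_rem(z, g)
    if any(s):                      # non-zero syndrome: correct one position
        z = list(z)
        z[leaders[s]] ^= 1
    return z
```

Since g(x) is irreducible and divides x^7 − 1, the seven remainders of x^0, ..., x^6 are distinct, so the single-error table is well defined.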
Hamming codes
Hamming codes are binary cyclic single error correcting codes. We consider cyclic codes
over GF(2), where g(x) is an irreducible polynomial of degree r such that g(x) divides
x^{2^r − 1} − 1, but not x^ℓ − 1 for ℓ < 2^r − 1; then n = 2^r − 1 and
k = 2^r − 1 − r.
Proposition 11.4
This code has minimum distance d_min^H = 3 and therefore is a single error correcting code.

Proof. Assume that x^i = 0 mod g(x), for some 0 ≤ i ≤ n − 1; then x^i = Q(x) g(x), which is impossible since g(x) is not divisible by x.
Now assume that x^i and x^j give the same remainder upon division by g(x), and that 0 ≤ i < j ≤ n − 1; then

x^j − x^i = x^i (x^{j−i} − 1) = Q(x) g(x)

but g(x) does not divide x^i, so it must divide (x^{j−i} − 1). But 0 < j − i ≤ n − 1 and by assumption g(x) does not divide this polynomial. Hence d_min^H ≥ 3.
By the bound (11.15) we know that for a code with fixed n and k the following inequality holds:

\[ 2^k \left[ 1 + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{\left\lfloor \frac{d_{min}^H - 1}{2} \right\rfloor} \right] \le 2^n \qquad (11.135) \]

As n = 2^r − 1 and k = n − r, we have

\[ 1 + \binom{2^r - 1}{1} + \binom{2^r - 1}{2} + \cdots + \binom{2^r - 1}{\left\lfloor \frac{d_{min}^H - 1}{2} \right\rfloor} \le 2^r \qquad (11.136) \]

but

\[ 1 + \binom{2^r - 1}{1} = 2^r \qquad (11.137) \]

and therefore d_min^H ≤ 3.
We have seen in the previous section how to implement an encoder for a cyclic code.
We consider now the decoder device of Figure 11.6, whose operations are described as
follows.
1. Initially all storage elements of the register contain zeros and the switch SW is in position 0. The received n-bit word z = (z_0, …, z_{n−1}) is sequentially clocked into the lower register, with n storage elements, and into the feedback register, with r storage elements, whose content is denoted by r_0, r_1, …, r_{r−1}.
2. After n clock pulses, the behavior of the decoder depends on the value of v: if v = 0, the switch SW remains in position 0 and both registers are clocked once. This procedure is repeated until v = 1, which occurs for r_0 = r_1 = ··· = r_{r−2} = 0; then SW moves to position 1 and the content of the last stage of the feedback shift register is added modulo 2 to the content of the last stage of the lower register; both registers are then clocked until the n bits of the entire word are obtained at the output of the decoder. Overall, 2n clock pulses are needed.
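The decoder's effect can be reproduced in software: the syndrome x^r z(x) mod g(x) of a received word is matched against the syndrome x^r x^i mod g(x) of each single-error pattern, and the matching bit is flipped. This is a sketch of the decoding rule, not of the literal shift-register hardware; function names and the integer bit representation are our own.

```python
def gf2_mod(z, g):
    """Remainder of z(x) mod g(x) over GF(2); polynomials as integers."""
    r = g.bit_length() - 1
    while z.bit_length() - 1 >= r:
        z ^= g << (z.bit_length() - 1 - r)
    return z

def hamming_correct(z, g, n):
    """Single-error correction for a binary cyclic Hamming code."""
    r = g.bit_length() - 1
    syndrome = gf2_mod(z << r, g)        # x^r z(x) mod g(x), as in (11.138)
    if syndrome == 0:
        return z                         # no error detected
    for i in range(n):                   # syndromes of single errors are distinct
        if gf2_mod(1 << (r + i), g) == syndrome:
            return z ^ (1 << i)          # flip the erroneous bit
    return z

g, n = 0b1011, 7                         # g(x) = x^3 + x + 1, (7,4) Hamming code
c = 0b1011                               # a code word: g(x) itself
for i in range(n):
    assert hamming_correct(c ^ (1 << i), g, n) == c
```

The uniqueness of the single-error syndromes is exactly what Proposition 11.4 guarantees.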
Figure 11.6. Scheme of a decoder for binary cyclic single error correcting codes (Hamming codes). All operations are in GF(2).
We now illustrate the procedure of the scheme of Figure 11.6. First of all we note that for the first n clocks the device coincides with that of Figure 11.3, hence the content of the shift register is given by

r(x) = x^r z(x) mod g(x)   (11.138)

1. The received word is correct, z(x) = c(x). After the first n clock pulses, from (11.138) we have

r(x) = x^r c(x) = x^r a(x) g(x) = 0 mod g(x)   (11.139)

and thus

v = 1 and r_{r−1} = 0   (11.140)

ĉ_i = z_i + 0,  i = 0, …, n − 1   (11.141)

therefore ĉ = c.

2. The received word is affected by one error, z(x) = c(x) + x^i. In other words we assume that there is a single error in the i-th bit, 0 ≤ i ≤ n − 1. After the first n clock pulses, it is

r(x) = x^r x^i mod g(x)   (11.142)

If i = n − 1, we have

r(x) = x^r x^{n−1} = x^n x^{r−1} = (x^n − 1) x^{r−1} + x^{r−1} = x^{r−1} mod g(x)   (11.143)

This leads to switching SW; therefore during the last n clock pulses the bit in error is corrected. In the general case, with the error in position i = n − j, only at the (n + j − 1)-th clock pulse does the condition (11.144) that forces SW to switch from 0 to 1 occur; therefore, at the next clock pulse the received bit in error will be corrected.
where within the two ‘1’s the values can be either ‘0’ or ‘1’. Then we can write the vector e in polynomial form.

Table 11.14 Parameters of some binary simplex codes.

n     k    r     d_min
7     3    4     4
15    4    11    8
31    5    26    16
63    6    57    32
127   7    120   64

Simplex codes
Simplex codes are cyclic codes for which the weight of all non-zero code words is equal to the same constant. We show that in the binary case, for these codes the non-zero code words are related to the PN sequences of Appendix 3.A. Let n = q^k − 1, and x^n − 1 = g(x) h(x), where we choose h(x) as a primitive polynomial of degree k; then the resultant code has minimum distance

d_min^H = (q − 1) q^{k−1}   (11.149)

The parameters of some binary codes in this class are listed in Table 11.14.
To show that these codes have minimum distance given by (11.149), first we prove the
following:
Assume the converse is true, that is x^i g(x) = x^j g(x) mod (x^n − 1), with i < j; then, dividing by g(x),

x^i (x^{j−i} − 1) = 0 mod h(x)

But this is impossible since h(x) is a primitive polynomial of degree k and cannot divide (x^{j−i} − 1), as (j − i) < n = (q^k − 1).
Relation (11.150) implies that all cyclic shifts of the code polynomial g(x) are unique, and there are n = (q^k − 1) cyclic shifts. Furthermore, we know that there are only q^k code words and one is the all-zero word; therefore the cyclic shifts of g(x) are all the non-zero code words and they all have the same weight.
11.2. Block codes 877

Recall Property 2 of a group code (see page 832), that is, if all code words of a linear code are written as rows of a matrix, every column is either formed by all zeros, or it consists of each field element repeated an equal number of times. If we apply this result to a simplex code, we find that no column can be all zero as the code is cyclic, so the sum of the weights of all code words is given by

sum of weights = n (q − 1) q^k / q = (q^k − 1)(q − 1) q^{k−1}   (11.153)

But there are (q^k − 1) non-zero code words, all of the same weight; the weight of each word is then given by

weight of non-zero code words = (q − 1) q^{k−1}   (11.154)

Therefore the minimum weight of the non-zero code words is given by

d_min^H = (q − 1) q^{k−1}   (11.155)
Example 11.2.26
Let q = 2, n = 15, and k = 4; hence r = 11, and d_min^H = 8. Choose h(x) as a primitive polynomial of degree 4.
Relation to PN sequences
We consider a periodic binary sequence of period L, given by …, p(−1), p(0), p(1), …, with p(ℓ) ∈ {0, 1}. We define the normalized autocorrelation function of this sequence as

\[ r_p(m) = \frac{1}{L} \left[ L - 2 \sum_{\ell=0}^{L-1} \left( p(\ell) \oplus p(\ell - m) \right) \right] \qquad (11.158) \]

Note that with respect to (3.302), now p(ℓ) ∈ {0, 1} rather than p(ℓ) ∈ {−1, 1}.

Theorem 11.1
If the periodic binary sequence {p(ℓ)} is formed by repeating any non-zero code word of a simplex binary code of length L = n = 2^k − 1, then

\[ r_p(m) = \begin{cases} 1 & m = 0, \pm L, \pm 2L, \ldots \\ -\dfrac{1}{L} & \text{otherwise} \end{cases} \qquad (11.159) \]
Proof. We recall that for a simplex binary code all non-zero code words
a) have weight 2^{k−1},
b) are cyclic permutations of the same code word.
As the code is linear, the Hamming distance between any code word and a cyclic permutation of this word is 2^{k−1}; this means that for the periodic sequence formed by repeating any non-zero code word we obtain

\[ \sum_{\ell=0}^{L-1} \left( p(\ell) \oplus p(\ell - m) \right) = \begin{cases} 0 & m = 0, \pm L, \pm 2L, \ldots \\ 2^{k-1} & \text{otherwise} \end{cases} \qquad (11.160) \]

If we recall the implementation of Figure 11.4, we find that the generation of such sequences is easy. We just need to determine the shift register associated with h(x), load it with anything except all zeros, and let it run. For example, choosing h(x) = x^4 + x + 1, we get the PN sequence of Figure 3.41, as illustrated in Figure 11.7, where L = n = 2^4 − 1 = 15.
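The generation and the autocorrelation property (11.159) can be checked with a few lines of code. This is a sketch: the sequence obeys the linear recurrence p(t) = p(t−1) ⊕ p(t−4) associated with a degree-4 primitive polynomial (the tap/stage labeling convention of Figure 11.4 may differ), and any non-zero initial load works.

```python
def lfsr_sequence(num_steps, state=(0, 0, 0, 1)):
    """One period of the maximal-length sequence p(t) = p(t-1) XOR p(t-4).

    A sketch of the shift register of Figure 11.4; the tap convention is
    an assumption. state holds (p(t-1), p(t-2), p(t-3), p(t-4)).
    """
    s = list(state)
    out = []
    for _ in range(num_steps):
        new = s[0] ^ s[3]          # feedback from stages 1 and 4
        out.append(s[-1])          # output the oldest stage
        s = [new] + s[:-1]
    return out

def autocorrelation(p, m):
    """Normalized autocorrelation (11.158) of a sequence of period L."""
    L = len(p)
    return (L - 2 * sum(p[l] ^ p[(l - m) % L] for l in range(L))) / L

p = lfsr_sequence(15)                                   # one full period, L = 15
assert sum(p) == 8                                      # weight 2^(k-1) = 8
assert autocorrelation(p, 0) == 1.0
assert all(abs(autocorrelation(p, m) + 1/15) < 1e-12 for m in range(1, 15))
```

The two-valued autocorrelation {1, −1/15} is exactly the statement of Theorem 11.1 for k = 4.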
Using this method we see that c(x) = c_0 + c_1 x + ··· + c_{n−1} x^{n−1} is a code polynomial if and only if c(α_1) = c(α_2) = ··· = c(α_L) = 0; thus

\[
\begin{bmatrix}
(\alpha_1)^0 & (\alpha_1)^1 & (\alpha_1)^2 & \ldots & (\alpha_1)^{n-1} \\
(\alpha_2)^0 & (\alpha_2)^1 & (\alpha_2)^2 & \ldots & (\alpha_2)^{n-1} \\
\vdots & & & & \vdots \\
(\alpha_L)^0 & (\alpha_L)^1 & (\alpha_L)^2 & \ldots & (\alpha_L)^{n-1}
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_{n-1} \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\qquad (11.162)
\]

All vectors c = [c_0, c_1, …, c_{n−1}]^T with elements from GF(q) that are solutions of this set of equations, where operations are performed according to the rules of GF(q^m), are
Figure 11.7. Contents of the shift register for h(x) = x^4 + x + 1 at successive clock pulses (clock index followed by the four register stages):

 0:  0 0 0 1
 1:  1 0 0 0
 2:  0 1 0 0
 3:  0 0 1 0
 4:  1 0 0 1
 5:  1 1 0 0
 6:  0 1 1 0
 7:  1 0 1 1
 8:  0 1 0 1
 9:  1 0 1 0
10:  1 1 0 1
11:  1 1 1 0
12:  1 1 1 1
13:  0 1 1 1
14:  0 0 1 1
15:  0 0 0 1
code words. The form of (11.162) resembles equation (11.20), where H is the generalized parity check matrix. One obvious difference is that in (11.20) H and c have elements from the same field, whereas this does not occur for the vector equation (11.162). However, this difference is not crucial, as each element from GF(q^m) can be written as a vector of length m with elements from GF(q). Thus each element (α_i)^j in the matrix is replaced by a column vector with m components. The resultant matrix, with Lm rows and n columns, consists of elements from GF(q) and is therefore just a generalized parity check matrix for the considered code.
From the above discussion it appears that, if L roots are specified, the resultant linear code has r = Lm parity check symbols, as the parity check matrix has r = Lm rows. However, not all rows of the matrix are necessarily independent; therefore the actual number of parity check symbols may be less than Lm.
We now show that if n is properly chosen, the resultant codes are cyclic codes. Let m_j(x) be the minimum function of α_j, j = 1, 2, …, L, where α_j ∈ GF(q^m) and m_j(x) has coefficients from GF(q). For Property 3 on page 858, every code polynomial c(x) must be divisible by m_1(x), m_2(x), …, m_L(x), and is thus divisible by the least common multiple of such minimum functions, l.c.m.(m_1(x), m_2(x), …, m_L(x)). If we define

g(x) = l.c.m.(m_1(x), m_2(x), …, m_L(x))   (11.163)

then all multiples of g(x) are code words. In particular from Definition 11.12 the code is cyclic if

x^n − 1 = g(x) h(x)   (11.164)

If ℓ_j denotes the order of α_j, this occurs for

n = l.c.m.(ℓ_1, ℓ_2, …, ℓ_L)   (11.165)

From the properties of the minimum function (see Property 2, page 861), we know that g(x) divides x^n − 1; thus the code is cyclic if n is chosen as indicated by (11.165). We note that

r = deg(g(x)) ≤ mL   (11.166)

as deg(m_i(x)) ≤ m. We see that r is equal to mL if all minimum functions are distinct and are of degree m; conversely, r < mL if any minimum function has degree less than m or if two or more minimum functions are identical.
Example 11.2.27
Choose q = 2 and let α be a primitive element of GF(2^4); furthermore let the code polynomials have as roots the elements α, α^2, α^3, α^4. To derive the minimum functions of the chosen elements we look up, for example, Appendix C of [3], where such functions are listed. Minimum functions and orders of the elements chosen for this example are given in Table 11.15. Then

g(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)
n = l.c.m.(15, 15, 5, 15) = 15   (11.167)

The resultant code is therefore a (15,7) code; later we will show that d_min^H = 5.

Table 11.15 Minimum functions and orders of the chosen elements.

Element   Minimum function          Order
α         x^4 + x + 1               15
α^2       x^4 + x + 1               15
α^3       x^4 + x^3 + x^2 + x + 1   5
α^4       x^4 + x + 1               15
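The product in (11.167) is a carry-less multiplication of binary polynomials, which can be sketched as follows (the integer bit encoding, bit i ↔ x^i, is our own convention):

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as integers (bit i <-> x^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a      # add (XOR) the current shifted copy of a
        a <<= 1
        b >>= 1
    return result

m1 = 0b10011        # x^4 + x + 1
m3 = 0b11111        # x^4 + x^3 + x^2 + x + 1
g = gf2_poly_mul(m1, m3)
print(bin(g))       # g(x) = x^8 + x^7 + x^6 + x^4 + 1
# deg g = 8, so k = 15 - 8 = 7: the (15,7) code of the example.
assert g.bit_length() - 1 == 8
```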
The basic mathematical fact required to prove the error correcting capability of BCH codes is that the determinant of the Vandermonde matrix, formed with elements α_1, α_2, …, α_r from any field,

\[
D = \det \begin{bmatrix}
1 & 1 & \ldots & 1 \\
\alpha_1 & \alpha_2 & \ldots & \alpha_r \\
\alpha_1^2 & \alpha_2^2 & \ldots & \alpha_r^2 \\
\vdots & & & \vdots \\
\alpha_1^{r-1} & \alpha_2^{r-1} & \ldots & \alpha_r^{r-1}
\end{bmatrix}
\qquad (11.168)
\]

is non-zero if and only if the elements α_1, α_2, …, α_r are all distinct. Regarding D as a polynomial P(x) in the first element, so that D = P(α_1), we note that P(x) is a polynomial of degree at most r − 1 whose zeros are x = α_2, x = α_3, …, x = α_r, because if x = α_i, i = 2, 3, …, r, the determinant D is equal to zero as two columns of the matrix are identical. Thus

P(x) = k_1 (x − α_2)(x − α_3) … (x − α_r)

and

D = P(α_1) = k_1 (α_1 − α_2)(α_1 − α_3) … (α_1 − α_r)   (11.172)

Proceeding in the same manner we find

\[
D = (-1)^{r + (r-1) + \cdots + 2 + 1} \prod_{\substack{i,j = 1 \\ i < j}}^{r} (\alpha_i - \alpha_j)
= (-1)^{\frac{r(r+1)}{2}} \prod_{\substack{i,j = 1 \\ i < j}}^{r} (\alpha_i - \alpha_j)
\qquad (11.175)
\]
Theorem 11.2
Consider a code with symbols from GF(q), whose code polynomials have as zeros the elements α^{m_0}, α^{m_0+1}, …, α^{m_0+d−2}, where α is any element from GF(q^m) and m_0 is any integer. Then the resultant (n, k) cyclic code has the following properties:
a) it has minimum distance d_min^H ≥ d if the elements α^{m_0}, α^{m_0+1}, …, α^{m_0+d−2} are distinct;
b) n − k ≤ (d − 1)m; if q = 2 and m_0 = 1, then n − k ≤ ⌈(d − 1)/2⌉ m;

Proof. The proof of part d) has already been given (see (11.163)); the proof of part b) then follows by noting that each minimum function is at most of degree m, and there are at most (d − 1) distinct minimum functions. If q = 2 and m_0 = 1, the minimum function of α raised to an even power, for example α^{2i}, is the same as the minimum function of α^i (see Property 2 on page 859); therefore there are at most ⌈(d − 1)/2⌉ distinct minimum functions.
To prove part c) note that, if d = 2, we have only the root α^{m_0}, so that n is equal to the order of α^{m_0}. If there is more than one root, then n must be the least common multiple of the orders of the roots. If α^{m_0} and α^{m_0+1} are both roots, then (α^{m_0})^n = 1 and (α^{m_0+1})^n = 1, so that α^n = 1; thus n is a multiple of the order of α. On the other hand, if ℓ is the order of α, (α^{m_0+i})^ℓ = (α^ℓ)^{m_0+i} = 1^{m_0+i} = 1; therefore ℓ is a multiple of the order of every root. Then n is the least common multiple of numbers all of which divide ℓ, and therefore n ≤ ℓ; thus n = ℓ.
Finally we prove part a). We note that the code words must satisfy the condition

\[
\begin{bmatrix}
1 & \alpha^{m_0} & (\alpha^{m_0})^2 & \ldots & (\alpha^{m_0})^{n-1} \\
1 & \alpha^{m_0+1} & (\alpha^{m_0+1})^2 & \ldots & (\alpha^{m_0+1})^{n-1} \\
\vdots & & & & \vdots \\
1 & \alpha^{m_0+d-2} & (\alpha^{m_0+d-2})^2 & \ldots & (\alpha^{m_0+d-2})^{n-1}
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_{n-1} \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\qquad (11.176)
\]

We now show that no linear combination of (d − 1) or fewer columns is equal to 0. We do this by showing that the determinant of any set of (d − 1) columns is non-zero. Choose columns j_1, j_2, …, j_{d−1}; then

\[
\det \begin{bmatrix}
(\alpha^{m_0})^{j_1} & (\alpha^{m_0})^{j_2} & \ldots & (\alpha^{m_0})^{j_{d-1}} \\
(\alpha^{m_0+1})^{j_1} & (\alpha^{m_0+1})^{j_2} & \ldots & (\alpha^{m_0+1})^{j_{d-1}} \\
\vdots & & & \vdots \\
(\alpha^{m_0+d-2})^{j_1} & (\alpha^{m_0+d-2})^{j_2} & \ldots & (\alpha^{m_0+d-2})^{j_{d-1}}
\end{bmatrix}
\qquad (11.177)
\]

\[
= \alpha^{m_0 (j_1 + j_2 + \cdots + j_{d-1})}
\det \begin{bmatrix}
1 & 1 & \ldots & 1 \\
\alpha^{j_1} & \alpha^{j_2} & \ldots & \alpha^{j_{d-1}} \\
\vdots & & & \vdots \\
(\alpha^{j_1})^{d-2} & (\alpha^{j_2})^{d-2} & \ldots & (\alpha^{j_{d-1}})^{d-2}
\end{bmatrix}
\qquad (11.178)
\]

\[
= \alpha^{m_0 (j_1 + j_2 + \cdots + j_{d-1})} \, (-1)^{\frac{(d-1)d}{2}}
\prod_{\substack{i,k = 1 \\ i < k}}^{d-1} \left( \alpha^{j_i} - \alpha^{j_k} \right) \ne 0
\qquad (11.179)
\]

Note that we have proven that (d − 1) columns of H are linearly independent even if they are multiplied by elements from GF(q^m). All that would have been required was to show linear independence if the multipliers are from GF(q).
Table 11.16 Minimum functions of the elements of GF(2^6).

Conjugate roots                      Minimum function
α, α^2, α^4, α^8, α^16, α^32         x^6 + x + 1
α^3, α^6, α^12, α^24, α^48, α^33     x^6 + x^4 + x^2 + x + 1
α^5, α^10, α^20, α^40, α^17, α^34    x^6 + x^5 + x^2 + x + 1
α^7, α^14, α^28, α^56, α^49, α^35    x^6 + x^3 + 1
α^9, α^18, α^36                      x^3 + x^2 + 1
α^11, α^22, α^44, α^25, α^50, α^37   x^6 + x^5 + x^3 + x^2 + 1
α^13, α^26, α^52, α^41, α^19, α^38   x^6 + x^4 + x^3 + x + 1
α^15, α^30, α^60, α^57, α^51, α^39   x^6 + x^5 + x^4 + x^2 + 1
α^21, α^42                           x^2 + x + 1
α^23, α^46, α^29, α^58, α^53, α^43   x^6 + x^5 + x^4 + x + 1
α^27, α^54, α^45                     x^3 + x + 1
α^31, α^62, α^61, α^59, α^55, α^47   x^6 + x^5 + 1
Example 11.2.28
Consider binary BCH codes of length 63, that is q = 2 and m = 6. To get a code with design distance d we choose as roots α, α^2, α^3, …, α^{d−1}, where α is a primitive element from GF(2^6). Using Table 11.11 on page 861, we get the minimum functions of the elements from GF(2^6) given in Table 11.16. Then the roots and generator polynomials for different values of d are given in Table 11.17; the parameters of the relative codes are given in Table 11.18.

Example 11.2.29
Let q = 2, m = 6, and choose as roots α, α^2, α^3, α^4, with α = β^3, where β is a primitive element of GF(2^6); then n = (2^6 − 1)/3 = 21, and d_min^H ≥ d = 5.
Example 11.2.30
Let q = 2, m = 4, and choose as roots α, α^2, α^3, α^4, with α a primitive element of GF(2^4); then a (15,7) code is obtained having d_min^H ≥ 5, and g(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1).
Table 11.17 Roots and generator polynomials of the binary BCH codes of length 63.

d    Roots               g(x)
3    α, α^2              (x^6 + x + 1) = g_3(x)
5    α, α^2, α^3, α^4    (x^6 + x + 1)(x^6 + x^4 + x^2 + x + 1) = g_5(x)
7    α, α^2, …, α^6      (x^6 + x^5 + x^2 + x + 1) g_5(x) = g_7(x)
9    α, α^2, …, α^8      (x^6 + x^3 + 1) g_7(x) = g_9(x)
11   α, α^2, …, α^10     (x^3 + x^2 + 1) g_9(x) = g_11(x)
13   α, α^2, …, α^12     (x^6 + x^5 + x^3 + x^2 + 1) g_11(x) = g_13(x)
15   α, α^2, …, α^14     (x^6 + x^4 + x^3 + x + 1) g_13(x) = g_15(x)
21   α, α^2, …, α^20     (x^6 + x^5 + x^4 + x^2 + 1) g_15(x) = g_21(x)
23   α, α^2, …, α^22     (x^2 + x + 1) g_21(x) = g_23(x)
27   α, α^2, …, α^26     (x^6 + x^5 + x^4 + x + 1) g_23(x) = g_27(x)
31   α, α^2, …, α^30     (x^3 + x + 1) g_27(x) = g_31(x)

Table 11.18 Parameters of the binary BCH codes of length n = 63.

k    57  51  45  39  36  30  24  18  16  10  7
d    3   5   7   9   11  13  15  21  23  27  31
t    1   2   3   4   5   6   7   10  11  13  15
Example 11.2.31
Let q = 2, m = 4, and choose as roots α, α^2, α^3, α^4, α^5, α^6, with α a primitive element of GF(2^4); then a (15,5) code is obtained having d_min^H ≥ 7, and g(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)(x^2 + x + 1).
Reed–Solomon codes
Reed–Solomon codes represent a particular case of BCH codes obtained by choosing m = 1; in other words, the field GF(q) and the extension field GF(q^m) coincide. Choosing α as a primitive element, from (11.180) we get

n = q^m − 1 = q − 1   (11.184)

Note that the minimum function with coefficients in GF(q) of an element α^i from GF(q) is

m_{α^i}(x) = (x − α^i)   (11.185)

so that r = (d − 1); the block length n is given by the order of α. In this case we show that d_min^H = d; in fact, for any code we have d_min^H ≤ r + 1, as we can always choose a code word with every message symbol but one equal to zero, and therefore its weight is at most equal to r + 1; from this it follows that d_min^H ≤ d, but from the BCH theorem we know that d_min^H ≥ d.

Example 11.2.32
Choose α as a primitive element of GF(2^5), and choose the roots α, α^2, α^3, α^4, α^5, and α^6; then the resultant (31,25) code has d_min^H = 7, g(x) = (x − α)(x − α^2)(x − α^3)(x − α^4)(x − α^5)(x − α^6), and the symbols of the code words are from GF(2^5).
Observation 11.2
The encoding of Reed–Solomon codes can be done by the devices of Figure 11.3 or Figure 11.4, where the operations are in GF(q). In Table 11.19 and Table 11.20 we give, respectively, the tables of additions and multiplications between elements of GF(q) for q = 16.
Table 11.19 Addition between elements of GF(16).

+     | 0    α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14
0     | 0    α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14
α^0   | α^0  0    α^4  α^8  α^14 α^1  α^10 α^13 α^9  α^2  α^7  α^5  α^12 α^11 α^6  α^3
α^1   | α^1  α^4  0    α^5  α^9  α^0  α^2  α^11 α^14 α^10 α^3  α^8  α^6  α^13 α^12 α^7
α^2   | α^2  α^8  α^5  0    α^6  α^10 α^1  α^3  α^12 α^0  α^11 α^4  α^9  α^7  α^14 α^13
α^3   | α^3  α^14 α^9  α^6  0    α^7  α^11 α^2  α^4  α^13 α^1  α^12 α^5  α^10 α^8  α^0
α^4   | α^4  α^1  α^0  α^10 α^7  0    α^8  α^12 α^3  α^5  α^14 α^2  α^13 α^6  α^11 α^9
α^5   | α^5  α^10 α^2  α^1  α^11 α^8  0    α^9  α^13 α^4  α^6  α^0  α^3  α^14 α^7  α^12
α^6   | α^6  α^13 α^11 α^3  α^2  α^12 α^9  0    α^10 α^14 α^5  α^7  α^1  α^4  α^0  α^8
α^7   | α^7  α^9  α^14 α^12 α^4  α^3  α^13 α^10 0    α^11 α^0  α^6  α^8  α^2  α^5  α^1
α^8   | α^8  α^2  α^10 α^0  α^13 α^5  α^4  α^14 α^11 0    α^12 α^1  α^7  α^9  α^3  α^6
α^9   | α^9  α^7  α^3  α^11 α^1  α^14 α^6  α^5  α^0  α^12 0    α^13 α^2  α^8  α^10 α^4
α^10  | α^10 α^5  α^8  α^4  α^12 α^2  α^0  α^7  α^6  α^1  α^13 0    α^14 α^3  α^9  α^11
α^11  | α^11 α^12 α^6  α^9  α^5  α^13 α^3  α^1  α^8  α^7  α^2  α^14 0    α^0  α^4  α^10
α^12  | α^12 α^11 α^13 α^7  α^10 α^6  α^14 α^4  α^2  α^9  α^8  α^3  α^0  0    α^1  α^5
α^13  | α^13 α^6  α^12 α^14 α^8  α^11 α^7  α^0  α^5  α^3  α^10 α^9  α^4  α^1  0    α^2
α^14  | α^14 α^3  α^7  α^13 α^0  α^9  α^12 α^8  α^1  α^6  α^4  α^11 α^10 α^5  α^2  0
11.2. Block codes 887
Table 11.20 Multiplication between elements of GF(16).

·     | 0  α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14
0     | 0  0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
α^0   | 0  α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14
α^1   | 0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0
α^2   | 0  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1
α^3   | 0  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2
α^4   | 0  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3
α^5   | 0  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4
α^6   | 0  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5
α^7   | 0  α^7  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6
α^8   | 0  α^8  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7
α^9   | 0  α^9  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8
α^10  | 0  α^10 α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9
α^11  | 0  α^11 α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10
α^12  | 0  α^12 α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11
α^13  | 0  α^13 α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12
α^14  | 0  α^14 α^0  α^1  α^2  α^3  α^4  α^5  α^6  α^7  α^8  α^9  α^10 α^11 α^12 α^13
Representation of the elements of GF(16) generated by the primitive polynomial x^4 + x + 1 (binary coefficients of 1, x, x^2, x^3):

0      0                     0000
α^0    1                     1000
α^1    x                     0100
α^2    x^2                   0010
α^3    x^3                   0001
α^4    1 + x                 1100
α^5    x + x^2               0110
α^6    x^2 + x^3             0011
α^7    1 + x + x^3           1101
α^8    1 + x^2               1010
α^9    x + x^3               0101
α^10   1 + x + x^2           1110
α^11   x + x^2 + x^3         0111
α^12   1 + x + x^2 + x^3     1111
α^13   1 + x^2 + x^3         1011
α^14   1 + x^3               1001
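The tables above can be reproduced programmatically. The following sketch (helper names are our own) builds the power representation of GF(16) from the primitive polynomial x^4 + x + 1; elements are 4-bit integers with bit i holding the coefficient of x^i.

```python
def build_gf16():
    """Power and log tables for GF(16) with primitive polynomial x^4 + x + 1."""
    exp = [0] * 15   # exp[i] = binary representation of alpha^i
    log = {}         # inverse map: representation -> exponent
    v = 1
    for i in range(15):
        exp[i] = v
        log[v] = i
        v <<= 1                  # multiply by alpha (= x)
        if v & 0b10000:          # reduce modulo x^4 + x + 1
            v ^= 0b10011
    return exp, log

exp, log = build_gf16()

def add_pow(i, j):
    """alpha^i + alpha^j as an exponent (None for the zero element)."""
    s = exp[i] ^ exp[j]
    return None if s == 0 else log[s]

def mul_pow(i, j):
    """alpha^i * alpha^j = alpha^((i+j) mod 15)."""
    return (i + j) % 15

# Reproduce two entries of Table 11.19:
assert add_pow(0, 1) == 4        # alpha^0 + alpha^1 = alpha^4
assert add_pow(3, 14) == 0       # alpha^3 + alpha^14 = alpha^0
```

Multiplication (Table 11.20) is just addition of exponents modulo 15, which is why that table consists of cyclic shifts.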
we obtain

\[ H z = H e = s = \begin{bmatrix} s_{m_0} \\ s_{m_0+1} \\ s_{m_0+2} \\ \vdots \\ s_{m_0+d-2} \end{bmatrix} \qquad (11.188) \]

where

\[ s_j = \sum_{\ell=0}^{n-1} e_\ell \, (\alpha^j)^\ell \qquad j = m_0, m_0+1, \ldots, m_0+d-2 \qquad (11.189) \]

If ν errors have occurred, the syndromes can be written as

\[ s_j = \sum_{i=1}^{\nu} \varepsilon_i \, \xi_i^{\,j} \qquad (11.190) \]

where the coefficients ε_i are elements from GF(q^m) that represent the values of the errors, and the coefficients ξ_i are elements from GF(q^m) that give the positions of the errors. In other words, if ξ_i = α^ℓ then an error has occurred in position ℓ, where ℓ ∈ {0, 1, 2, …, n − 1}. The idea of a decoding algorithm is to solve this set of non-linear equations for the unknowns ε_i and ξ_i; there are 2ν unknowns and d − 1 equations.
We show that it is possible to solve this set of equations if 2ν ≤ d − 1, assuming m_0 = 1; in the case m_0 ≠ 1, the decoding procedure does not change.
Consider the polynomial in x, also called error indicator polynomial,

λ(x) = λ_ν x^ν + λ_{ν−1} x^{ν−1} + ··· + λ_1 x + 1   (11.191)

defined as the polynomial that has as zeros the inverses of the elements that locate the positions of the errors, that is ξ_i^{−1}, i = 1, …, ν. Then

λ(x) = (1 − xξ_1)(1 − xξ_2) … (1 − xξ_ν)   (11.192)

If the coefficients of λ(x) are known, it is possible to find the zeros of λ(x), and thus determine the positions of the errors.
The first step of the decoding procedure consists in evaluating the coefficients λ_1, …, λ_ν using the syndromes (11.190). We multiply both sides of (11.191) by ε_i ξ_i^{j+ν} and evaluate the expression found for x = ξ_i^{−1}, obtaining

0 = ε_i ξ_i^{j+ν} (1 + λ_1 ξ_i^{−1} + λ_2 ξ_i^{−2} + ··· + λ_ν ξ_i^{−ν})   (11.193)

which can be written as

ε_i (ξ_i^{j+ν} + λ_1 ξ_i^{j+ν−1} + λ_2 ξ_i^{j+ν−2} + ··· + λ_ν ξ_i^{j}) = 0   (11.194)
(11.194) holds for i = 1, …, ν, and for every value of j. Adding these equations for i = 1, …, ν, we get

\[ \sum_{i=1}^{\nu} \varepsilon_i \left( \xi_i^{j+\nu} + \lambda_1 \xi_i^{j+\nu-1} + \lambda_2 \xi_i^{j+\nu-2} + \cdots + \lambda_\nu \xi_i^{j} \right) = 0 \quad \text{for every } j \qquad (11.195) \]

or equivalently

\[ \sum_{i=1}^{\nu} \varepsilon_i \xi_i^{j+\nu} + \lambda_1 \sum_{i=1}^{\nu} \varepsilon_i \xi_i^{j+\nu-1} + \lambda_2 \sum_{i=1}^{\nu} \varepsilon_i \xi_i^{j+\nu-2} + \cdots + \lambda_\nu \sum_{i=1}^{\nu} \varepsilon_i \xi_i^{j} = 0 \quad \text{for every } j \qquad (11.196) \]

As ν ≤ (d − 1)/2, if 1 ≤ j ≤ ν the sums in (11.196) are equal to the syndromes (11.190); therefore we obtain

λ_1 s_{j+ν−1} + λ_2 s_{j+ν−2} + ··· + λ_ν s_j = −s_{j+ν},  j = 1, …, ν   (11.197)

(11.197) is a system of linear equations that can be written in the form

\[
\begin{bmatrix}
s_1 & s_2 & s_3 & \ldots & s_{\nu-1} & s_\nu \\
s_2 & s_3 & s_4 & \ldots & s_\nu & s_{\nu+1} \\
\vdots & & & & & \vdots \\
s_\nu & s_{\nu+1} & s_{\nu+2} & \ldots & s_{2\nu-2} & s_{2\nu-1}
\end{bmatrix}
\begin{bmatrix} \lambda_\nu \\ \lambda_{\nu-1} \\ \vdots \\ \lambda_1 \end{bmatrix}
= -
\begin{bmatrix} s_{\nu+1} \\ s_{\nu+2} \\ \vdots \\ s_{2\nu} \end{bmatrix}
\qquad (11.198)
\]
For a given λ, (11.201) is the equation of a recursive filter, which can be implemented by a shift register with feedback, whose coefficients are given by λ, as illustrated in Figure 11.8. The solution of (11.198) is thus equivalent to the problem of finding the shift register with feedback of minimum length that, if suitably initialized, yields the sequence of syndromes. This will identify the polynomial λ(x) of minimum degree ν, which we recall exists and is unique, as the ν × ν matrix of the original problem is invertible.
The Berlekamp–Massey algorithm to find the recursive filter can be applied in any field and does not make use of the particular properties of the sequence of syndromes s_1, s_2, …, s_{d−1}. To determine the recursive filter we must find two quantities, that we denote as (L, λ(x)), where L is the length of the shift register and λ(x) is the polynomial whose degree ν must satisfy the condition ν ≤ L. The algorithm is inductive, that is, for each r, starting from r = 1, we determine a shift register that generates the first r syndromes.

Figure 11.8. Shift register with feedback coefficients −λ_1, −λ_2, …, −λ_ν that generates s_j from s_{j−1}, …, s_{j−ν}.

The shift register identified by (L_r, λ^{(r)}(x)) will then be a shift register of minimum length that generates the sequence s_1, …, s_r.
\[ \delta_r = \begin{cases} 1 & \text{if } \Delta_r \ne 0 \text{ and } 2L_{r-1} \le r - 1 \\ 0 & \text{otherwise} \end{cases} \qquad (11.203) \]

\[ L_r = \delta_r \, (r - L_{r-1}) + (1 - \delta_r) \, L_{r-1} \qquad (11.204) \]

\[ \begin{bmatrix} \lambda^{(r)}(x) \\ \beta^{(r)}(x) \end{bmatrix} = \begin{bmatrix} 1 & -\Delta_r x \\ \Delta_r^{-1} \delta_r & (1 - \delta_r) x \end{bmatrix} \begin{bmatrix} \lambda^{(r-1)}(x) \\ \beta^{(r-1)}(x) \end{bmatrix} \qquad (11.205) \]

At the end, the polynomial λ^{(d−1)}(x) satisfies

\[ \lambda_0^{(d-1)} = 1 \qquad (11.206) \]

and

\[ s_r = - \sum_{j=1}^{L_{d-1}} \lambda_j^{(d-1)} s_{r-j} \qquad r = L_{d-1} + 1, \ldots, d - 1 \qquad (11.207) \]
Note that Δ_r can be zero only if δ_r = 0; in this case we assign to Δ_r^{−1} δ_r the value zero. Moreover, we see that the algorithm requires a complexity of the order of d^2 operations, against a complexity of the order of d^3 operations needed by the matrix inversion in (11.198). To prove that the polynomial λ^{(d−1)}(x) given by the algorithm is indeed the polynomial of minimum degree with λ_0^{(d−1)} = 1 that satisfies (11.207), we use the following two lemmas [2].
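As a concrete illustration, the iteration (11.203)–(11.205) can be sketched for the binary case, where Δ_r^{−1} = Δ_r = 1 and signs are immaterial. This is LFSR synthesis over GF(2), not the full GF(2^m) decoder; the implementation style is our own.

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR (L, lambda) generating the binary sequence s.

    lam is the connection polynomial as a coefficient list with lam[0] = 1;
    the sequence satisfies s[r] = XOR_j lam[j] & s[r-j] for r >= L.
    """
    n = len(s)
    lam = [1] + [0] * n        # lambda^(r)(x)
    beta = [1] + [0] * n       # beta^(r)(x)
    L, m = 0, 1                # m = steps since the last length change
    for r in range(n):
        delta = s[r]           # discrepancy Delta_r
        for j in range(1, L + 1):
            delta ^= lam[j] & s[r - j]
        if delta == 0:
            m += 1
        elif 2 * L <= r:       # length change, as in (11.203)-(11.204)
            old = lam[:]
            for j in range(n + 1 - m):
                lam[j + m] ^= beta[j]
            beta, L, m = old, r + 1 - L, 1
        else:                  # update lambda, keep the length
            for j in range(n + 1 - m):
                lam[j + m] ^= beta[j]
            m += 1
    return L, lam[:L + 1]

# The sequence below obeys the length-3 recurrence s[t] = s[t-1] XOR s[t-3].
L, lam = berlekamp_massey_gf2([1, 0, 0, 1, 1, 1, 0, 1])
print(L, lam)
```

The returned register of minimum length recovers exactly the generating recurrence, which is the content of the two lemmas proved below.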
In Lemma 1 we find the relation between the lengths of the shift registers of minimum length obtained in two consecutive iterations, L_r and L_{r−1}. In Lemma 2 we use the algorithm to construct a shift register that generates s_1, …, s_r starting from the shift register of minimum length that generates s_1, …, s_{r−1}. We will conclude that the construction yields the shift register of minimum length since it satisfies Lemma 1.

Lemma 1. We assume that (L_{r−1}, λ^{(r−1)}(x)) is the shift register of minimum length that generates s_1, …, s_{r−1}, while (L_r, λ^{(r)}(x)) is the shift register of minimum length that generates s_1, …, s_{r−1}, s_r, and λ^{(r)}(x) ≠ λ^{(r−1)}(x); then

L_r ≥ max(L_{r−1}, r − L_{r−1})   (11.208)

Proof. The inequality (11.208) is the combination of the two inequalities L_r ≥ L_{r−1} and L_r ≥ r − L_{r−1}. The first inequality is obvious, because if a shift register generates a certain sequence it must also generate any initial part of this sequence; the second inequality is obvious if L_{r−1} ≥ r, because L_r is a non-negative quantity. Thus we assume L_{r−1} < r, and suppose that the second inequality is not satisfied; then L_r ≤ r − 1 − L_{r−1}, or r ≥ L_{r−1} + L_r + 1. By assumption we have
\[ \begin{cases} s_j = -\displaystyle\sum_{i=1}^{L_{r-1}} \lambda_i^{(r-1)} s_{j-i} & j = L_{r-1} + 1, \ldots, r - 1 \\[3mm] s_r \ne -\displaystyle\sum_{i=1}^{L_{r-1}} \lambda_i^{(r-1)} s_{r-i} \end{cases} \qquad (11.209) \]

and

\[ s_j = -\sum_{k=1}^{L_r} \lambda_k^{(r)} s_{j-k} \qquad j = L_r + 1, \ldots, r \qquad (11.210) \]
We observe that

\[ s_r = -\sum_{k=1}^{L_r} \lambda_k^{(r)} s_{r-k} = \sum_{k=1}^{L_r} \lambda_k^{(r)} \sum_{i=1}^{L_{r-1}} \lambda_i^{(r-1)} s_{r-k-i} \qquad (11.211) \]
Lemma 2.

L_r = max(L_{r−1}, r − L_{r−1})   (11.213)

and every shift register that generates s_1, …, s_r, and has a length that satisfies (11.213), is a shift register of minimum length. The Berlekamp–Massey algorithm yields this shift register.
Proof. From Lemma 1, L_r cannot be smaller than the right-hand side of (11.213); thus, if we construct a shift register that yields the given sequence and whose length satisfies (11.213), then it must be a shift register of minimum length. The proof is obtained by induction.
We construct a shift register that satisfies the Lemma at the r-th iteration, assuming that shift registers were iteratively constructed for each value of the index k, with k ≤ r − 1. For each k, k = 1, …, r − 1, let (L_k, λ^{(k)}(x)) be the shift register of minimum length that generates s_1, …, s_k. We assume that
If Δ_r = 0, then the shift register (L_{r−1}, λ^{(r−1)}(x)) also generates the first r symbols of the sequence, hence

L_r = L_{r−1},  λ^{(r)}(x) = λ^{(r−1)}(x)   (11.217)

If Δ_r ≠ 0, then it is necessary to find a new shift register. Recall from (11.215) that there was a variation in the length of the shift register for k = m; therefore

\[ s_j + \sum_{i=1}^{L_{m-1}} \lambda_i^{(m-1)} s_{j-i} = \begin{cases} 0 & j = L_{m-1} + 1, \ldots, m - 1 \\ \Delta_m \ne 0 & j = m \end{cases} \qquad (11.218) \]

and by induction,
and let L_r = deg(λ^{(r)}(x)). Then, as deg(λ^{(r−1)}(x)) ≤ L_{r−1}, and deg[x^{r−m} λ^{(m−1)}(x)] ≤ r − m + L_{m−1}, we obtain

L_r = max(L_{r−1}, r − L_{r−1})   (11.222)
It remains to prove that the shift register (L_r, λ^{(r)}(x)) generates the given sequence. By direct computation we obtain

\[
s_j + \sum_{i=1}^{L_r} \lambda_i^{(r)} s_{j-i}
= \left[ s_j + \sum_{i=1}^{L_{r-1}} \lambda_i^{(r-1)} s_{j-i} \right]
- \Delta_r \Delta_m^{-1} \left[ s_{j-r+m} + \sum_{i=1}^{L_{m-1}} \lambda_i^{(m-1)} s_{j-r+m-i} \right]
\]
\[
= \begin{cases} 0 & j = L_r, L_r + 1, \ldots, r - 1 \\ \Delta_r - \Delta_r \Delta_m^{-1} \Delta_m = 0 & j = r \end{cases}
\qquad (11.223)
\]
By inspection we see that the term within brackets is equal to (1 − ξ_i^{d−1} x^{d−1}); thus

\[ \omega(x) = \sum_{i=1}^{\nu} \varepsilon_i \, \xi_i \, x \left( 1 - \xi_i^{d-1} x^{d-1} \right) \prod_{\substack{\ell = 1 \\ \ell \ne i}}^{\nu} (1 - \xi_\ell x) \ \bmod x^{d-1} \qquad (11.229) \]

\[ \varepsilon_\ell = \frac{\omega(\xi_\ell^{-1})}{\displaystyle \prod_{\substack{j=1 \\ j \ne \ell}}^{\nu} (1 - \xi_j \xi_\ell^{-1})} = -\frac{\omega(\xi_\ell^{-1})}{\xi_\ell^{-1} \, \lambda'(\xi_\ell^{-1})} \qquad (11.231) \]

which proves the first part of equality (11.231). Moreover, from (11.230), we have

\[ \lambda'(\xi_\ell^{-1}) = -\xi_\ell \prod_{\substack{j=1 \\ j \ne \ell}}^{\nu} (1 - \xi_j \xi_\ell^{-1}) \qquad (11.233) \]
Example 11.2.33 (Reed–Solomon (15,9) code with d = 7 (t = 3), and elements from GF(2^4))
From (11.186), using Table 11.19 and Table 11.20, the generator polynomial is given by

Suppose that the code polynomial c(x) = 0 is transmitted, and that the received polynomial is

z(x) = α x^7 + α^5 x^5 + α^11 x^2   (11.235)

In this case e(x) = z(x). From (11.189), using Table 11.19 and Table 11.20, the syndromes are

s_1 = α α^7 + α^5 α^5 + α^11 α^2 = α^12
s_2 = α α^14 + α^5 α^10 + α^11 α^4 = 1
s_3 = α α^21 + α^5 α^15 + α^11 α^6 = α^14
s_4 = α α^28 + α^5 α^20 + α^11 α^8 = α^13   (11.236)
s_5 = α α^35 + α^5 α^25 + α^11 α^10 = 1
s_6 = α α^42 + α^5 α^30 + α^11 α^12 = α^11
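The syndromes (11.236) can be checked numerically with GF(16) arithmetic built on the primitive polynomial x^4 + x + 1 (a sketch; helper names and the 4-bit integer encoding are our own):

```python
def gf16_mul(a, b):
    """Carry-less multiplication modulo the primitive polynomial x^4 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return p

def gf16_pow(a, e):
    p = 1
    for _ in range(e):
        p = gf16_mul(p, a)
    return p

ALPHA = 0b0010   # the element x, primitive in this representation

def syndrome(error_terms, j):
    """s_j of (11.189) for an error pattern given as (value, position) pairs."""
    s = 0
    for eps, pos in error_terms:
        s ^= gf16_mul(eps, gf16_pow(ALPHA, j * pos))
    return s

# e(x) = alpha x^7 + alpha^5 x^5 + alpha^11 x^2, as in (11.235)
e = [(gf16_pow(ALPHA, 1), 7), (gf16_pow(ALPHA, 5), 5), (gf16_pow(ALPHA, 11), 2)]
assert syndrome(e, 1) == gf16_pow(ALPHA, 12)   # s_1 = alpha^12
assert syndrome(e, 2) == 1                     # s_2 = 1
assert syndrome(e, 6) == gf16_pow(ALPHA, 11)   # s_6 = alpha^11
```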
The Berlekamp–Massey algorithm is initialized with

λ^{(0)}(x) = 1,  β^{(0)}(x) = 1,  L_0 = 0   (11.237)

Step 6 (r = 6): Δ_6 = α^11 + α^14 + α^11 α^13 + α^14 α^14 = 0, δ_6 = 0, L_6 = L_5 = 3.

\[ \begin{bmatrix} \lambda^{(6)}(x) \\ \beta^{(6)}(x) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & x \end{bmatrix} \begin{bmatrix} 1 + \alpha^{14} x + \alpha^{11} x^2 + \alpha^{14} x^3 \\ \alpha^4 + \alpha^3 x \end{bmatrix} = \begin{bmatrix} 1 + \alpha^{14} x + \alpha^{11} x^2 + \alpha^{14} x^3 \\ \alpha^4 x + \alpha^3 x^2 \end{bmatrix} \]

The error indicator polynomial is λ(x) = λ^{(6)}(x). By using the exhaustive method to find the three roots, we obtain

λ′(x) = α^14 + α^14 x^2   (11.239)

\[ \omega(x) = (\alpha^{12} x + x^2 + \alpha^{14} x^3 + \alpha^{13} x^4 + x^5 + \alpha^{11} x^6)(1 + \alpha^{14} x + \alpha^{11} x^2 + \alpha^{14} x^3) \bmod x^6 \]
\[ = \alpha^{12} x + \alpha^{12} x^2 + \alpha^{8} x^3 \qquad (11.240) \]
An alternative approach for the encoding and decoding of Reed–Solomon codes utilizes the concept of Fourier transform on a Galois field [2, 5]. Let α be a primitive element of the field GF(q). The Fourier transform on the field GF(q) (GFFT) of a vector c = (c_0, c_1, …, c_{n−1}) of n symbols is defined as (C_0, C_1, …, C_{n−1}), where

\[ C_j = \sum_{i=0}^{n-1} c_i \, \alpha^{ij} \qquad j = 0, \ldots, n - 1 \qquad (11.242) \]
Let us consider a code word c of n symbols in the “time domain” from a Reed–Solomon cyclic code that corrects up to t errors; then c corresponds to a code polynomial that has as roots 2t = d − 1 consecutive powers of α. If we take the GFFT of this word, we find that in the “frequency domain” the transform has 2t consecutive components equal to zero. Indeed, from (11.176), specialized to Reed–Solomon codes, and from (11.242), we can show that the two conditions are equivalent, that is, a polynomial has 2t consecutive powers of α as roots if and only if the transform has 2t consecutive components equal to zero. The approach that resorts to the GFFT is therefore the mirror of the approach that uses the generator polynomial. This observation leads to the development of efficient methods for encoding and decoding.
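As a small numerical illustration of (11.242), here is a sketch over the prime field GF(7), with α = 3 as primitive element and n = q − 1 = 6 (the field and symbol choices are ours, not from the text):

```python
Q, ALPHA = 7, 3          # GF(7); 3 is a primitive element (order 6)
N = Q - 1

def gfft(c):
    """GFFT (11.242): C_j = sum_i c_i * alpha^(i*j), arithmetic in GF(7)."""
    return [sum(c[i] * pow(ALPHA, i * j, Q) for i in range(N)) % Q
            for j in range(N)]

# c(x) = x - alpha has the root alpha, so C_1 = c(alpha^1) must vanish:
c = [(-ALPHA) % Q, 1, 0, 0, 0, 0]
C = gfft(c)
assert C[1] == 0         # a root at alpha <-> a zero transform component
```

Since C_j = c(α^j), consecutive roots of the code polynomial appear exactly as consecutive zeros of the transform, which is the equivalence stated above.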
\[ P_{bit}^{(dec)} \simeq \frac{2t + 1}{n} \, P_w \qquad (11.245) \]

Example 11.2.34
For a (5,1) repetition code with d_min^H = 5 (see page 839), decoding with hard input yields

\[ P_w = \binom{5}{3} P_{bit}^3 (1 - P_{bit})^2 + \binom{5}{4} P_{bit}^4 (1 - P_{bit}) + P_{bit}^5 \qquad (11.246) \]
Example 11.2.35
For an (n, k) Hamming code with d_min^H = 3 (see page 839), (11.243) yields

\[ P_w \simeq \sum_{i=2}^{n} \binom{n}{i} P_{bit}^i (1 - P_{bit})^{n-i} = 1 - \left[ (1 - P_{bit})^n + n P_{bit} (1 - P_{bit})^{n-1} \right] \qquad (11.247) \]

For example, for a (15,11) code, if P_bit = 10^{−3} then P_w ≃ 10^{−4}, and from (11.245) we get P_bit^{(dec)} ≃ 2 · 10^{−5}.
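The numbers quoted for the (15,11) code can be reproduced directly from (11.247) and (11.245):

```python
from math import comb

def hamming_word_error(n, p):
    """P_w of (11.247): probability of 2 or more bit errors among n bits."""
    return 1 - ((1 - p) ** n + n * p * (1 - p) ** (n - 1))

n, p, t = 15, 1e-3, 1
Pw = hamming_word_error(n, p)
Pbit_dec = (2 * t + 1) / n * Pw          # rule of thumb (11.245)
print(f"Pw = {Pw:.2e}, Pbit_dec = {Pbit_dec:.2e}")
# Pw is dominated by the two-error term C(15,2) p^2 = 1.05e-4,
# consistent with the text's Pw ~ 1e-4 and Pbit_dec ~ 2e-5.
assert abs(Pw - comb(15, 2) * p ** 2) / Pw < 0.02
```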
The decoders that have been considered so far are classified as hard input decoders, as the demodulator output is quantized to the values of the coded symbols before decoding. In general, other decoding algorithms with soft input may be considered, that directly process the demodulated signal, and consequently the decoder input is real valued (see Section 11.3.2).
In the case of antipodal binary signals and soft input decoding we obtain (see also Section 6.8 on page 496)

\[ P_w \simeq (2^k - 1) \, Q\!\left( \sqrt{ R_c \, d_{min}^H \, \frac{2 E_b}{N_0} } \right) \qquad (11.248) \]
Example 11.3.1
Consider a rate 1/2 binary convolutional code, obtained by the encoder illustrated in Figure 11.9a. For each bit b_k that enters the encoder, two output bits, c_k^{(1)} and c_k^{(2)}, are transmitted. The first output c_k^{(1)} is obtained if the switch at the output is in the upper position, and the second output c_k^{(2)} is obtained if the switch is in the lower position; the two previous input bits, b_{k−1} and b_{k−2}, are stored in the memory of the encoder. As the information bit is not presented directly to one of the outputs, we say that the code is nonsystematic. The two coded bits are generated as linear combinations of the bits of the message; denoting the input sequence as …, b_0, b_1, b_2, b_3, …, and the output sequence as …, c_0^{(1)}, c_0^{(2)}, c_1^{(1)}, c_1^{(2)}, c_2^{(1)}, c_2^{(2)}, c_3^{(1)}, c_3^{(2)}, …, then the following
11.3. Convolutional codes 901
Figure 11.9. (a) Encoder and (b) tree diagram for the convolutional code of Example 11.3.1.
relations hold, where the sums are in GF(2) (the generators are those of Figure 11.13a):
$$c_k^{(1)} = b_k + b_{k-1} + b_{k-2} \qquad c_k^{(2)} = b_k + b_{k-2} \qquad (11.249)$$
A convolutional code may be described in terms of a tree, trellis, or state diagram; for
the code defined by (11.249) these descriptions are illustrated in Figures 11.9b, 11.10a, and
11.10b, respectively.
With reference to the tree diagram of Figure 11.9b, we begin at the left (root) node and
proceed to the right by choosing an upper path if the input bit is equal to 1 and a lower
path if the input bit is 0. We output the two bits represented by the label on the branch
that takes us to the next node, and then repeat this process at the next node.

Figure 11.10. (a) Trellis diagram and (b) state diagram for the convolutional code of
Example 11.3.1.

The nodes or states of the encoder are labeled with the letters a, b, c, and d, which indicate the relation
with the four possible values assumed by the two bits stored in the encoder, according to
the table:
$$b(D) = b_0 + b_1 D + b_2 D^2 + b_3 D^3 + \cdots$$
$$c^{(1)}(D) = c_0^{(1)} + c_1^{(1)} D + c_2^{(1)} D^2 + c_3^{(1)} D^3 + \cdots = g^{(1,1)}(D)\, b(D) \qquad (11.251)$$
Let $c^{(1)}(D), c^{(2)}(D), \ldots, c^{(n_0)}(D)$ be the D-transforms of the $n_0$ output sequences; then
$$c^{(j)}(D) = \sum_{i=1}^{k_0} g^{(j,i)}(D)\, b^{(i)}(D) \qquad j = 1, 2, \ldots, n_0 \qquad (11.254)$$
Therefore the encoder of a convolutional code must store the ν previous blocks of $k_0$ message
symbols to form a block of $n_0$ output symbols.
The general structure of an encoder for a code with $k_0 = 1$ and $n_0 = 2$ is illustrated
in Figure 11.11; for such an encoder, $\nu k_0$ storage elements are necessary. If the code is
systematic, then the encoder can be implemented with $\nu (n_0 - k_0)$ storage elements, as
illustrated in Figure 11.12 for $k_0 = 2$ and $n_0 = 3$.
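As an illustration (our sketch, not taken from the text), the encoder of Figure 11.9a can be simulated in a few lines of Python; the generator taps $g^{(1,1)}(D) = 1 + D + D^2$ and $g^{(2,1)}(D) = 1 + D^2$ are read off Figure 11.13a. Note that the impulse response has Hamming weight 5, which is the free distance of this code.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 feedforward convolutional encoder (k0=1, n0=2, nu=2).

    g1, g2 are the taps of g^(1,1)(D) = 1 + D + D^2 and
    g^(2,1)(D) = 1 + D^2, listed from the current bit b_k backwards.
    """
    state = [0, 0]  # b_{k-1}, b_{k-2}
    out = []
    for b in bits:
        window = [b] + state
        c1 = sum(g * x for g, x in zip(g1, window)) % 2
        c2 = sum(g * x for g, x in zip(g2, window)) % 2
        out += [c1, c2]
        state = [b, state[0]]
    return out

# Encoding the impulse 1 0 0 yields the interleaved generator sequences:
print(conv_encode([1, 0, 0]))  # → [1, 1, 1, 0, 1, 1]
```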
If we interpret the sequence $\{c_k\}$ as the output of a sequential finite-state machine (see
Appendix 8.D), at instant k the trellis of a nonsystematic code is defined by the three
signals:
1. Input $[b_k^{(1)}, b_k^{(2)}, \ldots, b_k^{(k_0)}]$ (11.257)
2. State $[b_{k-1}^{(1)}, \ldots, b_{k-1}^{(k_0)}, \ldots, b_{k-\nu}^{(1)}, \ldots, b_{k-\nu}^{(k_0)}]$ (11.258)
3. Output $[c_k^{(1)}, c_k^{(2)}, \ldots, c_k^{(n_0)}]$
where $c_k^{(j)}$, $j = 1, \ldots, n_0$, is given by (11.254). Then there are $q^{k_0 \nu}$ states in the trellis.
There are $q^{k_0}$ branches departing from each state and $q^{k_0}$ branches merging into each state.
The output vector consists of $n_0$ q-ary symbols.
Figure 11.11. Block diagram of an encoder for a convolutional code with $k_0 = 1$, $n_0 = 2$, and
constraint length ν.
Figure 11.12. Block diagram of an encoder for a systematic convolutional code with $k_0 = 2$,
$n_0 = 3$, and constraint length ν.
Generator matrix
From (11.260), we introduce the matrices
$$g_i = \begin{bmatrix} g_i^{(1,1)} & g_i^{(2,1)} & \cdots & g_i^{(n_0,1)} \\ \vdots & & \vdots \\ g_i^{(1,k_0)} & g_i^{(2,k_0)} & \cdots & g_i^{(n_0,k_0)} \end{bmatrix} \qquad i = 0, \ldots, \nu \qquad (11.264)$$
Hence the generator matrix is of the form
$$G_\infty = \begin{bmatrix} g_0 & g_1 & \cdots & g_\nu & 0 & \cdots \\ 0 & g_0 & \cdots & g_{\nu-1} & g_\nu & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \end{bmatrix} \qquad (11.265)$$
Some examples of convolutional codes with the corresponding encoders and generator
matrices are illustrated in Figure 11.13.
Transfer function
An important parameter of a convolutional code is $d_{free}^H$, which determines the performance
of the code (see Section 11.3.3).
Definition 11.14
Let $e(D) = [e^{(n_0)}(D), \ldots, e^{(1)}(D)]$ be any error sequence between two code words
$c_1(D) = [c_1^{(n_0)}(D), \ldots, c_1^{(1)}(D)]$ and $c_2(D) = [c_2^{(n_0)}(D), \ldots, c_2^{(1)}(D)]$, that is, $c_1(D) =
c_2(D) + e(D)$, and let $e_k = [e_k^{(n_0)}, \ldots, e_k^{(1)}]$ denote the k-th element of the sequence. We
define the free Hamming distance of the code as
$$d_{free}^H = \min_{e(D)} \sum_{k=0}^{\infty} w(e_k) \qquad (11.266)$$
where $w$ is introduced in Definition 11.4 on page 832. As the code is linear, $d_{free}^H$ corresponds
to the minimum number of symbols different from zero in a non-zero code word.
Next we consider a method to compute the weights of all code words in a convolutional
code; to illustrate the method we examine the simple binary encoder of Figure 11.9a.
We begin by reproducing the trellis diagram of the code in Figure 11.14, where each
path is now labeled with the weight of the output bits corresponding to that path. We
consider all paths that diverge from state (d) and return to state (d) for the first time
after a number of steps j. By inspection, we find one such path of weight 5 returns to
state (d) after 3 steps; moreover, we find two distinct paths of weight 6, one that returns
to state (d) after 4 steps and another after 5 steps. Hence we find that this code has
H D 5.
dfree
We now look for a method that enables us to find the weights of all code words, as well
as the lengths of the paths that give rise to the code words with these weights. Consider
the state diagram for this code, redrawn in Figure 11.15 with branches labeled as $D^2$, $D$,
or $D^0 = 1$, where the exponent corresponds to the weight of the output bits corresponding
to that branch. Next we split node (0,0) to obtain the state diagram of Figure 11.16, and
we compute a generating function for the weights. The generating function is the transfer
function of a signal flow graph with unit input. From Figure 11.16, denoting by $\alpha$ the input
node, by $\varepsilon$ the output node, and by $\beta$, $\gamma$, $\delta$ the intermediate nodes, we obtain this transfer
function by solving the system of equations
$$\begin{aligned} \beta &= D^2 \alpha + \gamma \\ \gamma &= D\beta + D\delta \\ \delta &= D\beta + D\delta \\ \varepsilon &= D^2 \gamma \end{aligned} \qquad (11.267)$$
Figure 11.13. Examples of convolutional encoders with the corresponding generator matrices:
(a) $k_0 = 1$, $n_0 = 2$, $\nu = 2$, with $g_0 = (1,1)$, $g_1 = (1,0)$, $g_2 = (1,1)$;
(b) $k_0 = 2$, $n_0 = 3$, $\nu = 1$; (c) $k_0 = 3$, $n_0 = 4$, $\nu = 1$.
Figure 11.14. Trellis diagram of the code of Example 11.3.1; the labels represent the Hamming
weight of the output bits.
Figure 11.15. State diagram of the code of Example 11.3.1; the labels represent the Hamming
weight of the generated bits.
Figure 11.16. State diagram of the code of Example 11.3.1; node (0,0) is split to compute
the transfer function of the code.
Figure 11.17. State diagram of the code of Example 11.3.1; node (0,0) is split to compute
the augmented transfer function.
Then we get
$$t(D) = \frac{\varepsilon}{\alpha} = \frac{D^5}{1 - 2D} = D^5 + 2D^6 + 4D^7 + \cdots + 2^i D^{i+5} + \cdots \qquad (11.268)$$
From inspection of $t(D)$, we find there is one code word of weight 5, two of weight 6,
four of weight 7, .... Equation (11.268) holds for code words of infinite length.
If we want to find the code words that return to state (d) after j steps, we refer to the state
diagram of Figure 11.17. The term L introduced in the label on each branch allows us to
keep track of the length of the sequence, as the power of L is incremented by 1 every time
a transition occurs. Furthermore, we introduce the term I in the label on a branch if the
corresponding transition is due to an information bit equal to 1; this allows us to compute,
for each path on the trellis diagram, the corresponding number of information bits equal
to 1. The augmented transfer function is given by
$$t(D, L, I) = \frac{D^5 L^3 I}{1 - DL(1+L)I} = D^5 L^3 I + D^6 L^4 (1+L) I^2 + D^7 L^5 (1+L)^2 I^3 + \cdots + D^{5+i} L^{3+i} (1+L)^i I^{1+i} + \cdots \qquad (11.269)$$
Thus we see that the code word of weight 5 has length 3 and originates from a sequence
of information bits that contains one bit equal to 1; there are two code words of weight 6,
one of length 4 and the other of length 5, both of which originate from sequences of
information bits that contain two bits equal to 1, and so on.
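The weight enumeration performed by the transfer function can be cross-checked by a direct search on the state diagram. The following Python sketch (ours, assuming the generators $1 + D + D^2$ and $1 + D^2$ of Figure 11.13a) enumerates all paths that diverge from the zero state and return to it for the first time, tallying weight, length, and number of input ones exactly as $t(D, L, I)$ does:

```python
from collections import Counter

def step(state, b):
    # Transitions of the 4-state encoder; state = (b_{k-1}, b_{k-2}).
    b1, b2 = state
    c1 = (b + b1 + b2) % 2   # g^(1,1) = 1 + D + D^2
    c2 = (b + b2) % 2        # g^(2,1) = 1 + D^2
    return (b, b1), c1 + c2  # next state, output weight

# Enumerate paths that diverge from the zero state and return to it
# for the first time, keyed by (weight, length, number of input ones).
spectrum = Counter()
ns, w = step((0, 0), 1)          # the diverging branch (input 1)
frontier = [(ns, w, 1, 1)]
while frontier:
    state, w, length, ones = frontier.pop()
    if w > 8:
        continue  # truncate the (infinite) enumeration at weight 8
    for b in (0, 1):
        ns, dw = step(state, b)
        if ns == (0, 0):
            spectrum[(w + dw, length + 1, ones + b)] += 1
        else:
            frontier.append((ns, w + dw, length + 1, ones + b))

print(sorted(spectrum.items()))
```

The tally reproduces (11.269): one word of weight 5 (length 3, one input 1), two words of weight 6 (lengths 4 and 5, two input 1s each), and four words of weight 7.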
Figure 11.18. (a) Encoder and (b) state diagram for a catastrophic convolutional code.
state (1,1) does not increase the weight of the code word, so that a code word corresponding
to a path passing through the states $(0,0), (1,0), (1,1), (1,1), \ldots, (1,1), (0,1), (0,0)$ has
weight 6, independently of the number of times it passes through the self-loop at state (1,1).
In other words, long sequences of coded bits equal to zero may be obtained by remaining
in the state (0,0) with a sequence of information bits equal to zero, or by remaining in the
state (1,1) with a sequence of information bits equal to one. Therefore a limited number
of channel errors, in this case 6, can cause a large number of errors in the sequence of
decoded bits.
Definition 11.15
A convolutional code is catastrophic if there exists a closed loop in the state diagram that
has all branches with zero weight.
Figure 11.19. Two distinct infinite sequences of information bits that produce the same
output sequence with a finite number of errors.
For codes with rate $1/n_0$, it has been shown that a code is catastrophic if and only if
all generator polynomials have a common polynomial factor. In the above example, the
common factor is $1 + D$. This can be proved using the following argument: suppose that
$g^{(1,1)}(D), g^{(2,1)}(D), \ldots, g^{(n_0,1)}(D)$ all have the common factor $g_c(D)$, so that
$$g^{(i,1)}(D) = g_c(D)\, \tilde g^{(i,1)}(D) \qquad i = 1, 2, \ldots, n_0 \qquad (11.270)$$
Suppose the all-zero sequence is sent, $b^{(1)}(D) = 0$, and that the finite error sequence
$\tilde g^{(i,1)}(D)$, equal to that defined in (11.270), occurs in the i-th output subsequence, for
$i = 1, 2, \ldots, n_0$, as illustrated in Figure 11.19a. The same output sequence is obtained if
the sequence of information bits with infinite length $b^{(1)}(D) = 1/g_c(D)$ is sent, and no
channel errors occur, as illustrated in Figure 11.19b. Thus a finite number of errors yields a
decoded sequence of information bits that differs from the transmitted sequence in an infinite
number of positions.
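The common-factor criterion can be checked mechanically with polynomial arithmetic over GF(2). The sketch below (our illustration; the catastrophic generator pair is a hypothetical example, not taken from the text) represents polynomials as integer bit masks and computes their greatest common divisor:

```python
def gf2_divmod(a, b):
    # Polynomials over GF(2) as integer bit masks (bit i = coefficient of D^i).
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_gcd(a, b):
    # Euclid's algorithm on GF(2) polynomials.
    while b:
        a, b = b, gf2_divmod(a, b)[1]
    return a

# Generators of the (non-catastrophic) code of Example 11.3.1:
g11 = 0b111  # 1 + D + D^2
g21 = 0b101  # 1 + D^2
print(gf2_gcd(g11, g21))  # coprime: gcd = 1, the code is not catastrophic

# A hypothetical catastrophic rate-1/2 pair, both divisible by 1 + D:
h1 = 0b011  # 1 + D
h2 = 0b110  # D + D^2 = D (1 + D)
print(gf2_gcd(h1, h2))  # gcd = 0b11, i.e. the common factor 1 + D
```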
performance that is lower compared to decoding methods based on the observation of the
whole received sequence. The latter methods, also called probabilistic decoding methods,
include the Viterbi algorithm (VA), the sequential decoding algorithm by Fano [6], and the
forward-backward algorithm by Bahl–Cocke–Jelinek–Raviv (BCJR).
Before illustrating the various decoding methods, we consider an important function.
Interleaving
The majority of block codes as well as convolutional codes are designed by assuming that
the errors introduced by the noisy channel are statistically independent. This assumption is
not always true in practice. To make the channel errors, at least approximately, statistically
independent it is customary to resort to an interleaver, which performs a permutation of the
bits of a sequence. For example, a block interleaver orders the coded bits in a matrix with
M1 rows and M2 columns. The coded bits are usually written in the matrix by row and then
read by column before being forwarded to the bit mapper. At the receiver, a deinterleaver
stores the detected bits in a matrix of the same $M_1 \times M_2$ dimensions, where the writing is
done by column and the reading by row. As a result, possible error bursts of length $M_1 B$
are broken up into bursts of shorter length B.
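A minimal sketch of the block interleaver/deinterleaver pair described above (the function names are ours):

```python
def block_interleave(bits, M1, M2):
    # Write by rows into an M1 x M2 matrix, read by columns.
    assert len(bits) == M1 * M2
    return [bits[r * M2 + c] for c in range(M2) for r in range(M1)]

def block_deinterleave(bits, M1, M2):
    # Inverse permutation: write by columns, read by rows.
    assert len(bits) == M1 * M2
    out = [0] * (M1 * M2)
    positions = ((c, r) for c in range(M2) for r in range(M1))
    for i, (c, r) in enumerate(positions):
        out[r * M2 + c] = bits[i]
    return out

# Round trip: deinterleaving undoes interleaving. A burst of M1
# consecutive channel errors (one full column) is spread by the
# deinterleaver into single errors spaced M2 positions apart.
M1, M2 = 4, 5
tx = block_interleave(list(range(M1 * M2)), M1, M2)
rx = block_deinterleave(tx, M1, M2)
print(rx == list(range(M1 * M2)))  # → True
```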
Model with hard input. With reference to the transmission system of Figure 6.20, we
consider the sequence at the output of the binary channel. In this case the demodulator
has already detected the transmitted symbols, for example by a threshold detector, and the
inverse bit mapper provides the binary sequence $\{z_m = \tilde c_m\}$ to the decoder, from which we
obtain the interleaved binary sequences
where the errors $e_k^{(i)} \in \{0, 1\}$, for a memoryless binary symmetric channel, are i.i.d.
(see (6.91)).
From the description on page 904, introducing the state of the encoder at instant k as
the vector with ν elements
the desired sequence in (11.272), which coincides with the encoder output, can be written as
(see (11.271) and (11.273))
Model with soft input. Again with reference to Figure 6.20, at the decision point of the
receiver the signal can be written as (see (8.173))
$$z_k = u_k + w_k \qquad (11.275)$$
where we assume $w_k$ is white Gaussian noise with variance $\sigma_w^2 = 2\sigma_I^2$, and $u_k$ is given
by (8.174)
$$u_k = \sum_{n=-L_1}^{L_2} \eta_n\, a_{k-n} \qquad (11.276)$$
where $\{a_k\}$ is the sequence of symbols at the output of the bit mapper. Note that in (11.276)
the symbols $\{a_k\}$ are in general not independent, as the input of the bit mapper is a code
sequence generated according to the law (11.274).
The relation between $u_k$ and the bits $\{b_\ell\}$ depends on the intersymbol interference in
(11.276), the type of bit mapper, and the encoder (11.271). We consider the case of absence
of ISI, that is,
$$u_k = a_k \qquad (11.277)$$
and a 16-PAM system where, without interleaving, four consecutive code bits $c_{2k}^{(1)}$, $c_{2k}^{(2)}$,
$c_{2k-1}^{(1)}$, $c_{2k-1}^{(2)}$ are mapped into a symbol of the constellation. For an encoder with constraint
length ν we have
$$u_k = \tilde f(a_k) = \tilde f\,[\mathrm{BMAP}\{[c_{2k}^{(1)}, c_{2k}^{(2)}, c_{2k-1}^{(1)}, c_{2k-1}^{(2)}]\}] \qquad (11.278)$$
we can write
We observe that in this example each state of the trellis admits four possible transitions.
As we will see in Chapter 12, better performance is obtained by jointly optimizing the
encoder and the bit mapper.
Viterbi algorithm
The Viterbi algorithm, described in Section 8.10.1, is a probabilistic decoding method that
implements the maximum likelihood criterion, which minimizes the probability of detecting
a sequence that is different from the transmitted sequence.
VA with hard input. The trellis diagram is obtained by using the definition (11.273), and
the branch metric is the Hamming distance between $z_k = [z_k^{(1)}, z_k^{(2)}]^T$ and $c_k = [c_k^{(1)}, c_k^{(2)}]^T$
(see Definition 11.1).
VA with soft input. The trellis diagram is now obtained by using the definition (11.279),
and the branch metric is the Euclidean distance between $z_k$ and $u_k$,
$$|z_k - u_k|^2 \qquad (11.282)$$
where $u_k$, in the case of the previous example of absence of ISI and 16-PAM transmission,
is given by (11.280).
As an alternative to the VA we can use the FBA of Section 8.10.2.
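As a concrete illustration of hard-input Viterbi decoding (our sketch; the VA itself is described in Section 8.10.1), the following Python fragment decodes the rate-1/2 code of Example 11.3.1 with the Hamming branch metric and corrects a single channel error; the generator convention is again that of Figure 11.13a.

```python
def viterbi_hard(received):
    """Hard-input Viterbi decoding for the rate-1/2 code of Example 11.3.1.

    The branch metric is the Hamming distance between the received pair
    and the pair labeling the trellis branch.
    """
    def branch(state, b):
        b1, b2 = state
        return ((b + b1 + b2) % 2, (b + b2) % 2), (b, b1)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    cost = {s: 0.0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}
    for k in range(len(received) // 2):
        z = (received[2 * k], received[2 * k + 1])
        new_cost = {s: float("inf") for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            for b in (0, 1):
                c, ns = branch(s, b)
                m = cost[s] + (c[0] != z[0]) + (c[1] != z[1])
                if m < new_cost[ns]:
                    new_cost[ns], new_path[ns] = m, path[s] + [b]
        cost, path = new_cost, new_path
    # Return the information bits of the minimum-cost surviving path.
    return path[min(cost, key=cost.get)]

def encode(bits):
    # Matching encoder, used here only to generate a test sequence.
    s = (0, 0)
    out = []
    for b in bits:
        out += [(b + s[0] + s[1]) % 2, (b + s[1]) % 2]
        s = (b, s[0])
    return out

info = [1, 0, 1, 1, 0, 0, 0]   # trailing zeros terminate the trellis
rx = encode(info)
rx[3] ^= 1                      # one channel error
print(viterbi_hard(rx) == info)
```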
Forward-backward algorithm
The previous approach, which considers joint detection in the presence of ISI and convolutional
decoding, requires a computational complexity that in many applications may
turn out to be exceedingly large. In fact, the state (11.279), which takes into account both
encoding and the presence of ISI, is usually difficult to define and is composed of several
bits of the sequence $\{b_\ell\}$. An approximate solution is obtained by considering the detection
and the decoding problems separately, assuming, however, that the detector passes soft
information on the detected bits to the decoder.
Soft output detection by FBA. By using a trellis diagram that takes into account the ISI
introduced by the channel, the code bits $\{c_n\}$ are detected assuming that they are i.i.d., and
the reliability of the detection is computed (soft detection). For this purpose we use the
FBA of page 670, which determines for each state a metric $V_k(i)$, $i = 1, \ldots, N_s$.
Now, with reference to the example of the channel given by (11.277) and 16-PAM
transmission, the state is identified by $s_k = (a_k) = [c_{4k}, c_{4k-1}, c_{4k-2}, c_{4k-3}]$, where6 $\{c_n\}$,
$c_n \in \{-1, 1\}$, is assumed to be a sequence of i.i.d. symbols. By considering the binary
state representation, and by suitably adding the values $V_k(i)$, we get the MAP metric, or
6 It is sometimes convenient to view the encoder output $c_n$ and/or the encoder input $b_n$ as symbols from the
alphabet $\{-1, +1\}$, rather than $\{0, 1\}$. It will be clear from the context to which alphabet we refer.
By the above formulation, the soft decision associated with the bit $c_n$ is given by
Observation 11.3
For binary transmission in the absence of ISI, from (8.269) on page 675 we have, apart
from a non-essential additive constant,
$$\ell_n^{(in)}(\alpha) = -\frac{(z_n - \alpha)^2}{2\sigma_I^2} \qquad \alpha \in \{-1, 1\} \qquad (11.286)$$
and the corresponding log-likelihood ratio is
$$\ell_n^{(in)} = \frac{2}{\sigma_I^2}\, z_n \qquad (11.287)$$
In other words, apart from a constant factor, the LLR associated with the bit $c_n$ coincides
with the demodulator output $z_n$.
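Observation 11.3 is easy to verify numerically: expanding $(z_n - \alpha)^2$ for $\alpha = \pm 1$ shows that the difference of the two log-likelihoods in (11.286) reduces to $2 z_n / \sigma_I^2$. A small check (ours, with an arbitrary noise variance):

```python
import random

sigma2 = 0.5  # noise variance sigma_I^2 (arbitrary test value)
random.seed(1)
for _ in range(5):
    z = random.uniform(-2, 2)
    # Per-hypothesis log-likelihoods, eq. (11.286), up to an additive constant
    l = {a: -(z - a) ** 2 / (2 * sigma2) for a in (-1, 1)}
    # The LLR l(1) - l(-1) collapses to 2 z / sigma_I^2, eq. (11.287)
    assert abs((l[1] - l[-1]) - 2 * z / sigma2) < 1e-12
print("LLR equals 2*z/sigma^2")
```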
Rather than (11.284) and (11.283), we can use the Max-Log-MAP criterion (8.267), which
yields an approximate log-likelihood,
$$\tilde\ell_{4k-t}^{(in)}(\alpha) = \max_{\substack{i \in \{1, \ldots, N_s\} \\ \sigma_i \text{ with } t\text{-th binary component equal to } \alpha}} v_k(i) \qquad (11.288)$$
An alternative to the FBA is obtained by modifying the VA to yield a soft output (SOVA),
as discussed in the next section.
Convolutional decoding with soft input (SI). The decoder for the convolutional code typically
uses the VA with branch metric (associated with a cost function to be minimized)
given by
$$\| \boldsymbol\ell_k^{(in)} - c_k \|^2 \qquad (11.289)$$
Observation 11.4
As we have previously stated, the best system performance is obtained by jointly designing the
encoder and the bit mapper. However, in some systems, typically radio systems, an interleaver is
used between the encoder and the bit mapper. In this case joint detection and decoding are
impossible to implement in practice. Detection with soft output followed by decoding with
soft input remains a valid approach, obviously after re-ordering the LLRs as determined by
the deinterleaver.
In applications that require a soft output (see Section 11.6), the decoder, which in this case
is called soft-input soft-output (SISO), can use one of the versions of the FBA or the SOVA.7
Sequential decoding
Sequential decoding of convolutional codes represented the first practical algorithm for
ML decoding. It has been employed, for example, for the decoding of signals transmitted
by deep-space probes, such as the Pioneer, 1968 [10]. There exist several variants of
sequential decoding algorithms, which are characterized by the search for the optimum path
in a tree diagram (see Figure 11.9b), instead of along a trellis diagram, as considered, e.g.,
by the VA.
Sequential decoding is an attractive technique for the decoding of convolutional codes
and trellis codes if the number of states of the encoder is large [11]. In fact, whereas the
implementation complexity of ML decoders such as the Viterbi decoder grows exponentially with
the constraint length of the code, ν, the complexity of sequential decoding algorithms is
essentially independent of ν. On the other hand, sequential decoding presents the drawback
that the number of computations $N_{op}$ required for the decoding process to advance by one
branch in the decoder tree is a random variable with a Pareto distribution, i.e.
$$P[N_{op} > N] = A\, N^{-\rho} \qquad (11.292)$$
7 An extension of SISO decoders for the decoding of block codes is found in [8, 9].
where A and ρ are constants that depend on the channel characteristics, on the specific
code, and on the specific version of sequential decoding used. Real-time applications of sequen-
tial decoders require buffering of the received samples. As practical sequential decoders
can perform only a finite number of operations in a given time interval, resynchronization
of the decoder must take place if the maximum number of operations that is allowed for
decoding without incurring buffer saturation is exceeded.
To determine whether it is practical for a receiver to adopt sequential decoding, we recall
the definition of the cut-off rate of a transmission channel, and the associated minimum signal-
to-noise ratio $(E_b/N_0)_0$ (see page 509). Sequential decoders exhibit very good performance,
with a reduced complexity as compared to the VA, if the constraint length of the code is
sufficiently large and the signal-to-noise ratio is larger than $(E_b/N_0)_0$. If the latter condition
is not satisfied, the average number of operations required to produce one symbol at the
decoder output is very large.
The Fano algorithm
In this section we consider sequential decoding of trellis codes, a class of codes that will
be studied in detail in Chapter 12; however, the algorithm can be readily extended to
convolutional codes. At instant k, the $k_0$ information bits $b_k = [b_k^{(1)}, \ldots, b_k^{(k_0)}]$
are input to a rate $k_0/(k_0+1)$ convolutional encoder with constraint length ν that outputs
the coded bits $c_k = [c_k^{(0)}, \ldots, c_k^{(k_0)}]$. The $k_0 + 1$ coded bits select, from a constellation with
$M = 2^{k_0+1}$ elements, a symbol $a_k$, which is transmitted over an additive white Gaussian
noise channel. Note that the encoder tree diagram has $2^{k_0}$ branches, corresponding to the
values of $b_k$, stemming from each node. Each branch is labeled by the symbol $a_k$ selected
by the vector $c_k$. The received signal is given by (see (11.275))
$$z_k = a_k + w_k \qquad (11.293)$$
The received signal sequence is input to a sequential decoder. Using the notation of
Section 8.10.1, in the absence of ISI and assuming $a = \boldsymbol\alpha$, the ML metric to be maximized
can be written as [6]
$$\Gamma(\boldsymbol\alpha) = \sum_{k=0}^{K-1} \left[ \log_2 \frac{P_{z_k|a_k}(\rho_k|\alpha_k)}{\displaystyle\sum_{\alpha_i \in \mathcal{A}} P_{z_k|a_k}(\rho_k|\alpha_i)\, P_{a_k}(\alpha_i)} - B \right] \qquad (11.294)$$
Various algorithms have been proposed for sequential decoding [12, 13, 14]. We will
restrict our attention here to the Fano algorithm [6, 11].
The Fano algorithm examines only one path of the decoder tree at any time, using the
metric in (11.294). The considered path extends to a certain node in the tree and corresponds
to a segment of the entire code sequence $\boldsymbol\alpha$, up to symbol $\alpha_k$.
Three types of moves between consecutive nodes on the decoder tree are allowed: forward,
lateral, and backward. On a forward move, the decoder goes one branch to the right in the
decoder tree from the previously hypothesized node. This corresponds to the insertion of a
new symbol $\alpha_{k+1}$ in (11.294). On a lateral move, the decoder goes from a path on the tree to
another path differing only in the last branch. This corresponds to the selection of a different
symbol $\alpha_k$ in (11.294). The ordering among the nodes is arbitrary, and a lateral move takes
place to the next node in order after the current one. A backward move is a move one branch
to the left on the tree. This corresponds to the removal of the symbol $\alpha_k$ from (11.294).
To determine which move needs to be made after reaching a certain node, it is necessary
to compute the metric $\Gamma_k$ of the current node being hypothesized, and to consider the value of
the metric $\Gamma_{k-1}$ of the node one branch to the left of the current node, as well as the current
value of a threshold $T_h$, which can assume values that are multiples of a given constant Δ.
The transition diagram describing the Fano algorithm is illustrated in Figure 11.20. Typically,
Δ assumes values that are of the order of the minimum distance between symbols.
As already mentioned, real-time applications of sequential decoding require buffering
of the input samples with a buffer of size S. Furthermore, the depth of backward search
is also finite and is usually chosen to be at least five times the constraint length of the
code. To avoid erasures of output symbols in case of buffer saturation, in [15] a buffer
looking algorithm (BLA) is proposed. The buffer is divided into L sections, each with
size $S_j$, $j = 1, \ldots, L$. A conventional sequential decoder (primary decoder) and $L - 1$
secondary decoders are used. The secondary decoders employ fast algorithms, such as the
M-algorithm [16], or variations of the Fano algorithm that are obtained by changing the
value of the bias B in the metric (11.294).
and decoding with soft input, for a system with antipodal signals, yields
$$P_{bit}^{(dec)} \simeq A\, Q\!\left( \sqrt{ \frac{2 R_c\, d_{free}^H\, E_b}{N_0} } \right) \qquad (11.297)$$
Figure 11.21. (a) Block diagram of the encoder and bit mapper for a trellis code for 16-PAM
transmission, (b) structure of the rate-1/2 convolutional encoder, and (c) bit mapping for the
16-PAM format.
Figure 11.22. Symbol error probabilities for the 512-state 16-PAM trellis code with sequential
decoding (depth of search limited to 64 or 128) and 512-state Viterbi decoding (length of path
memory limited to 64 or 128). Symbol error probabilities for uncoded 8-PAM and 16-PAM
transmission are also shown.
11.4. Concatenated codes

The difference metric algorithm (DMA). Figure 11.24 shows a section of a trellis diagram
with four states, where we assume $s_k = (b_k, b_{k-1})$. We consider two states at instant $k-1$
that differ in the least significant bit $b_{k-2}$ of the binary representation, that is, $s_{k-1}^{(0)} = (00)$
and $s_{k-1}^{(1)} = (01)$. A transition from each of these two states to state $s_k = (00)$ at instant k is
allowed. According to the VA, we choose as survivor sequence the sequence that minimizes
the metric
$$\min_{i \in \{0,1\}} \left\{ \Gamma_{k-1}(s_{k-1}^{(i)}) + \gamma_k^{\,s_{k-1}^{(i)} \rightarrow s_k} \right\} \qquad (11.298)$$
where $\{\hat b_k\}$ is the sequence of information bits associated with the ML sequence.
The soft-output VA (SOVA). As for the DMA, the SOVA determines the difference between
the metrics of the survivor sequences on the paths that converge to each state of the trellis,
and updates at every instant k the reliability information $\lambda(s_k)$ for each state of the trellis.
To perform this update, the sequences on the paths that converge to a certain state are
compared to identify the positions at which the information bits of the two sequences
differ. With reference to Figure 11.24, we denote the two paths that converge to the state
(00) at instant k as path 0 and path 1. Without loss of generality we assume that the sequence
associated with path 0 is the survivor sequence, and thus the sequence with the smaller
cost; furthermore, we define $\lambda(s_k^{(0)}) = \{\lambda_k^{(0)}, \ldots, \lambda_{k-K_d}^{(0)}\}$ and $\lambda(s_k^{(1)}) = \{\lambda_k^{(1)}, \ldots, \lambda_{k-K_d}^{(1)}\}$
as the two reliability vectors associated with the information bits of the two sequences. If an
information bit along path 0 differs from the corresponding information bit along path 1,
then its reliability is updated according to the rule, for $i = k - K_d, \ldots, k-1$,
$$\lambda_i = \min(|\Delta_k|, \lambda_i^{(0)}) \qquad \text{if } b_{i-2}^{(0)} \neq b_{i-2}^{(1)} \qquad (11.300)$$
With reference to Figure 11.24, the two sequences on path 0 and on path 1 diverge from
state $s_k = (00)$ at instant $k-4$. The two sequences differ in the associated information bits
at the instants k and $k-1$; therefore, only $\lambda_{k-1}$ will be updated.
Modified SOVA. In the modified version of the SOVA, the reliability of an information
bit along the survivor path is also updated if the information bit is the same, according to
the rule, for $i = k - K_d, \ldots, k-1$,
$$\lambda_i = \begin{cases} \min(|\Delta_k|, \lambda_i^{(0)}) & \text{if } b_{i-2}^{(0)} \neq b_{i-2}^{(1)} \\ \min(|\Delta_k| + \lambda_i^{(1)}, \lambda_i^{(0)}) & \text{if } b_{i-2}^{(0)} = b_{i-2}^{(1)} \end{cases} \qquad (11.301)$$
Note that (11.300) is still used to update the reliability if the information bits differ; this
version of the SOVA gives a better estimate of ½i . As proved in [19], if the VA is used
as decoder, the modified SOVA is equivalent to Max-Log-MAP decoding. An example of
how the modified SOVA works is illustrated in Figure 11.25.
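The update rules (11.300) and (11.301) condense into a few lines of code. The sketch below (ours; the function and argument names are hypothetical) applies the modified-SOVA rule over the update window, given the reliability vectors and information bits of the two merging paths:

```python
def sova_update(lam0, lam1, bits0, bits1, delta_k, modified=True):
    """Reliability update for the survivor path (path 0).

    lam0, lam1: reliability vectors of the two merging paths over the
    update window; bits0, bits1: the associated information bits;
    delta_k: metric difference between the two paths.
    """
    out = list(lam0)
    for i in range(len(out)):
        if bits0[i] != bits1[i]:
            out[i] = min(abs(delta_k), lam0[i])            # rule (11.300)
        elif modified:
            out[i] = min(abs(delta_k) + lam1[i], lam0[i])  # rule (11.301)
    return out

# Hypothetical numbers: the two paths agree on the first bit and
# differ on the last two, with metric difference 1.5.
print(sova_update([3.0, 2.5, 4.0], [3.0, 2.5, 4.0],
                  [0, 1, 1], [0, 0, 0], delta_k=1.5))  # → [3.0, 1.5, 1.5]
```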
11.5. Turbo codes

Encoding
For the description of turbo codes, we refer to the first code of this class that appeared
in the scientific literature [20, 21]. A sequence of information bits is encoded by a simple
recursive systematic convolutional (RSC) binary encoder with code rate 1/2, to produce a
first sequence of parity bits, as illustrated in Figure 11.26. The same sequence of information
bits is permuted by a long interleaver and then encoded by a second recursive systematic
convolutional encoder with code rate 1=2 to produce a second sequence of parity bits. Then
the sequence of information bits and the two sequences of parity bits are transmitted. Note
that the resulting code has rate Rc D 1=3. Higher code rates Rc are obtained by transmitting
only some of the parity bits (puncturing). For example, for the turbo code in [20, 21], a
code rate equal to 1/2 is obtained by transmitting only the bits of the parity sequence 1 with
odd indices, and the bits of the parity sequence 2 with even indices. A specific example of
turbo encoder is reported in Figure 11.27.
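The rate computation under puncturing can be illustrated as follows (our sketch; 0-based indexing is assumed for the odd/even convention, and the parity values are placeholders, since the actual bits depend on the component encoders):

```python
def puncture(parity1, parity2):
    # Keep the odd-index bits of parity sequence 1 and the even-index
    # bits of parity sequence 2 (indices starting at 0: an assumption).
    out = []
    for k in range(len(parity1)):
        out.append(parity1[k] if k % 2 else parity2[k])
    return out

# Rate check: K info bits plus K punctured parity bits give rate 1/2,
# instead of rate 1/3 when both full parity sequences are sent.
K = 8
info = [1, 0, 1, 1, 0, 1, 0, 0]
p1 = [0, 1, 1, 0, 0, 1, 1, 0]   # parity from encoder 1 (placeholder values)
p2 = [1, 1, 0, 0, 1, 0, 0, 1]   # parity from encoder 2 (placeholder values)
tx = info + puncture(p1, p2)
print(len(info) / len(tx))  # → 0.5
```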
The exceptional performance of turbo codes is due to one particular characteristic.
We can think of a turbo code as being a block code for which an input word has a
length equal to the interleaver length, and a code word is generated by initializing to
zero the memory elements of the convolutional encoders before the arrival of each in-
put word. This block code has a group structure. As for the usual block codes, the
asymptotic performance, for large values of the signal-to-noise ratio, is determined by
the code words of minimum weight and by their number. For low values of the signal-
to-noise ratio, also the code words of non-minimum weight and their multiplicity need
to be taken into account. Before the introduction of turbo codes, the focus on design-
ing codes was mainly on asymptotic performance, and thus on maximizing the minimum
distance. With turbo codes, the approach is different. Because of the large ensemble of
code words, the performance curve, in terms of bit error probability as a function of the
signal-to-noise ratio, decreases rapidly for low values of the signal-to-noise ratio. For $P_{bit}$
lower than $10^{-5}$, where performance is determined by the minimum distance between
code words, the bit error probability curve usually exhibits a reduction in slope.
The two encoders that compose the scheme of Figure 11.26 are called component en-
coders and they are usually identical. As mentioned above, Berrou and Glavieux proposed
two recursive systematic convolutional encoders as component encoders. Later it was shown
that it is not necessary to use systematic encoders [23, 17].
Recursive convolutional codes are characterized by the property that the code bits at
a given instant do not depend only on the information bits at the present instant and the
previous ν instants, where ν is the constraint length of the code, but also on all the previous
information bits, as the encoder exhibits a structure with feedback.
Starting from a non-recursive nonsystematic convolutional encoder for a code with rate
$1/n_0$, it is possible to obtain in a very simple way a recursive systematic encoder for a
code with the same rate and the same code words, and hence with the same free distance
$d_{free}^H$. Obviously, for a given input word, the output code words will be different in the
two cases. Consider, for example, a non-recursive nonsystematic convolutional encoder for
a code with code rate 1/2. The code bits can be expressed in terms of the information bits
as (see (11.254))
$$c^{(1)}(D) = g^{(1,1)}(D)\, b(D) \qquad c^{(2)}(D) = g^{(2,1)}(D)\, b(D) \qquad (11.302)$$
$$c_k^{(1)} = b_k \qquad (11.306)$$
$$c_k^{(2)} = \sum_{i=0}^{\nu} g_i^{(2,1)} d_{k-i} \qquad (11.307)$$
where, using the fact that $g_0^{(1,1)} = 1$, from (11.305) we get
$$d_k = b_k + \sum_{i=1}^{\nu} g_i^{(1,1)} d_{k-i} \qquad (11.308)$$
We recall that the operations in the above equations are in GF(2). Another recursive systematic
encoder that generates a code with the same free distance is obtained by exchanging
the roles of the polynomials $g^{(1,1)}(D)$ and $g^{(2,1)}(D)$ in the above equations.
One recursive systematic encoder corresponding to the non-recursive nonsystematic en-
coder of Figure 11.9(a) is illustrated in Figure 11.28.
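Equations (11.306)-(11.308) translate directly into code. The following sketch (ours) implements the recursive systematic encoder of Figure 11.28, with $g^{(1,1)}(D) = 1 + D + D^2$ and $g^{(2,1)}(D) = 1 + D^2$; note that, unlike the feedforward encoder, a single input 1 produces a parity sequence that does not terminate by itself:

```python
def rsc_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Recursive systematic encoder derived from g^(1,1) = 1 + D + D^2
    and g^(2,1) = 1 + D^2, following eqs. (11.306)-(11.308):
    c1_k = b_k, d_k = b_k + sum_{i>=1} g1_i d_{k-i} (mod 2),
    c2_k = sum_{i>=0} g2_i d_{k-i} (mod 2)."""
    d = [0, 0]  # d_{k-1}, d_{k-2}
    out = []
    for b in bits:
        dk = (b + g1[1] * d[0] + g1[2] * d[1]) % 2           # (11.308)
        c2 = (g2[0] * dk + g2[1] * d[0] + g2[2] * d[1]) % 2  # (11.307)
        out += [b, c2]                                       # (11.306)
        d = [dk, d[0]]
    return out

# The systematic bits reproduce the input; the parity response to an
# impulse is infinite because of the feedback.
print(rsc_encode([1, 0, 0, 0, 0]))  # → [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
```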
The 16-state component encoder for a code with code rate 1/2 used in the turbo code of
Berrou and Glavieux [20, 21] is shown in Figure 11.29. The encoder in Figure 11.27, with
an 8-state component encoder for a code with code rate 1/2, is adopted in the standard for
third-generation universal mobile telecommunications systems (UMTS) [24]. Turbo codes
Figure 11.28. Recursive systematic encoder that generates a code with the same free
distance as the non-recursive nonsystematic encoder of Figure 11.9(a).
Figure 11.29. A 16-state component encoder for the turbo code of Berrou and Glavieux.
are also used in digital video broadcasting (DVB) [25] standards and in space telemetry ap-
plications as defined by the Consultative Committee for Space Data Systems (CCSDS) [26].
Generator polynomials of recursive systematic convolutional encoders for codes with rates
1/2, 1/3, 1/4, 2/3, 3/4, 4/5, and 2/4, which can be used for the construction of turbo codes,
are listed in [27].
Another fundamental component in the structure of turbo codes is the non-uniform
interleaver. We recall that a uniform⁸ interleaver, such as that described in Section 11.3.2,
operates by writing input bits into a matrix by rows and reading them out by columns. In practice,
a non-uniform interleaver determines the permutation of the sequence of input bits so that
adjacent bits in the input sequence are separated, after the permutation, by a number of bits
that varies with the position of the bits in the input sequence. The interleaver determines
directly the minimum distance of the code and therefore performance for high values
of the signal-to-noise ratio. Nevertheless, the choice of the interleaver is not critical for
low values of the signal-to-noise ratio. Beginning with the interleaver originally proposed
8 The adjective “uniform”, when referring to an interleaver, is used with a different meaning in [23].
11.5. Turbo codes 929
in [20, 21], various interleavers have since been proposed (see [28] and references contained
therein).
One of the interleavers that yields better performance is the so-called spread inter-
leaver [29]. Consider a block of M1 input bits. The integer numbers that indicate the
position of these bits after the permutation are randomly generated with the following
constraint: each integer randomly generated is compared with the S1 integers previously
generated; if the distance from them is shorter than a prefixed threshold S2 , the generated
integer is discarded and another one is generated until the condition is satisfied. The two
parameters S1 and S2 must be larger than the memory of the two component encoders. If
the two component encoders are equal, it is convenient to choose S1 = S2. The computation time needed to generate the interleaver increases with S1 and S2, and there is no
guarantee that the procedure terminates successfully. Empirically, it has been verified that,
choosing both S1 and S2 equal to the integer closest to √(M1/2), it is possible to generate
the interleaver in a finite number of steps.
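The generation procedure described above can be sketched as follows; since termination is not guaranteed for a single random draw, the sketch restarts with a new seed on failure.

```python
# Sketch of the spread ("S-random") interleaver: each new randomly drawn
# position must differ by at least S2 from each of the S1 positions accepted
# immediately before it.
import random

def spread_interleaver(M1, S1, S2, max_tries=1000, seed=0):
    """Random permutation of 0..M1-1 satisfying the spread constraint."""
    rng = random.Random(seed)
    perm, remaining = [], list(range(M1))
    while remaining:
        for _ in range(max_tries):
            cand = rng.choice(remaining)
            # compare with the last S1 accepted integers
            if all(abs(cand - p) >= S2 for p in perm[-S1:]):
                perm.append(cand)
                remaining.remove(cand)
                break
        else:
            return None        # termination not guaranteed: caller retries
    return perm

# restart with different seeds, mirroring the restart suggested above
perm = next(p for s in range(100)
            if (p := spread_interleaver(64, 4, 4, seed=s)) is not None)
```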
Many variations of the basic idea of turbo codes have been proposed; for example, codes
generated by the serial concatenation of two convolutional encoders connected by means of a
non-uniform interleaver [30]. Parallel and serial concatenation schemes were then extended
to the case of multilevel constellations to obtain coded modulation schemes with high
spectral efficiency (see [31] and references therein).
Figure 11.30. Principle of the decoder for a turbo code with rate 1=3.
The algorithms for iterative decoding introduced with turbo codes were also immediately
applied in wider contexts. In fact, this iterative procedure may be used whenever the
transmission system includes multiple processing elements with memory interconnected by
an interleaver. Iterative decoding procedures may be used, for example, for detection in the
presence of intersymbol interference, also called turbo equalization or turbo detection [32]
(see Section 11.6), for non-coherent decoding [33, 34], and for joint detection and decoding
in the case of transmission over channels with fading [35].
Before discussing iterative decoding in detail, it is useful to revisit the FBA.
where k0 can be seen either as the number of encoder inputs or as the length of an
information vector. As the convolutional encoder is systematic, at instant k the state of the
where the superscript (s) denotes systematic bits, and (p) denotes parity check bits. Then
c_k^(s) = b_k, and from (11.307) we can express c_k^(p) as a function of s_k and s_{k-1} as

    c_k^(p) = f^(p)(s_k, s_{k-1})                                 (11.313)
The values assumed by the code vectors are indicated by

    β = [β^(1), β^(2), …, β^(k0), β^(k0+1), …, β^(n0)] = [β^(s), β^(p)]     (11.314)
For simplicity, we assume that the code binary symbols so determined are transmitted
by a binary modulation scheme over an AWGN channel. In this case, at the decision point
of the receiver, we get the signal (see (11.275))

    z_k = c_k + w_k                                               (11.315)

where c_k ∈ {-1, 1} denotes a code bit, and {w_k} is a sequence of real-valued i.i.d. Gaussian
noise samples with variance σ_I². It is useful to organize the samples {z_k} into subsequences
that follow the structure of the code vectors (11.312). Then we introduce the vectors

    z_k = [z_k^(1), z_k^(2), …, z_k^(k0), z_k^(k0+1), …, z_k^(n0)] = [z_k^(s), z_k^(p)]     (11.316)

As usual we denote by ρ_k an observation of z_k,

    ρ_k = [ρ_k^(1), ρ_k^(2), …, ρ_k^(k0), ρ_k^(k0+1), …, ρ_k^(n0)] = [ρ_k^(s), ρ_k^(p)]     (11.317)
We recall from Section 8.10 that the FBA yields the detection of the single information
vector b_k, k = 0, 1, …, K-1, expressed as

    b̂_k = [b̂_k^(1), b̂_k^(2), …, b̂_k^(k0)]                         (11.318)

through the computation of the a posteriori probability. We also recall that in general for
a sequence a = [a_0, …, a_k, …, a_{K-1}], with the notation a_l^m we indicate the subsequence
formed by the components [a_l, a_{l+1}, …, a_m].
We note that the likelihoods associated with the individual bits of the information vector b_k
are obtained by suitably adding (11.319), as
In a manner similar to the analysis of page 668, we introduce the following quantities:

1. The state transition probability Π(j | i) = P[s_k = σ_j | s_{k-1} = σ_i], which assumes
   non-zero values only if there is a transition from the state s_{k-1} = σ_i to the state
   s_k = σ_j for a certain input β^(s), and we write

   L_k^(a)(β^(s)) is called the a priori information on the information vector
   b_k = β^(s), and is one of the soft inputs.
2. For an AWGN channel the channel transition probability p_{z_k}(ρ_k | j, i) can be separated into two contributions, one due to the systematic bits and the other to the parity
   check bits,

       p_{z_k}(ρ_k | j, i) = P[z_k = ρ_k | s_k = σ_j, s_{k-1} = σ_i]
                           = P[z_k^(s) = ρ_k^(s) | s_k = σ_j, s_{k-1} = σ_i] P[z_k^(p) = ρ_k^(p) | s_k = σ_j, s_{k-1} = σ_i]
                           = P[z_k^(s) = ρ_k^(s) | c_k^(s) = β^(s)] P[z_k^(p) = ρ_k^(p) | c_k^(p) = β^(p)]
                           = (1/√(2πσ_I²))^{k0} e^{-||ρ_k^(s) - β^(s)||²/(2σ_I²)} · (1/√(2πσ_I²))^{n0-k0} e^{-||ρ_k^(p) - β^(p)||²/(2σ_I²)}     (11.323)
where

    C_k^(s)(j | i) = e^{-||ρ_k^(s) - β^(s)||²/(2σ_I²)} L_k^(a)(β^(s))     (11.325)

    C_k^(p)(j | i) = e^{-||ρ_k^(p) - β^(p)||²/(2σ_I²)}                    (11.326)

The two previous quantities are related, respectively, to the systematic bits and the
parity check bits of a code vector. Observe that the exponential term in (11.325)
represents the reliability of a certain a priori information L_k^(a)(β^(s)) associated
with β^(s).
4. The computation of the forward and backward metrics is carried out as in the general
case.
   - Forward metric, for k = 0, 1, …, K-1:

         F_k(j) = Σ_{ℓ=1}^{N_s} C_k(j | ℓ) F_{k-1}(ℓ),     j = 1, …, N_s     (11.327)
written as

    P[s_{k-1} = σ_i, s_k = σ_j, z_0^{K-1} = ρ_0^{K-1}]
      = P[z_{k+1}^{K-1} = ρ_{k+1}^{K-1} | s_{k-1} = σ_i, s_k = σ_j, z_0^k = ρ_0^k]
        · P[s_k = σ_j, z_k = ρ_k | s_{k-1} = σ_i, z_0^{k-1} = ρ_0^{k-1}]
        · P[s_{k-1} = σ_i, z_0^{k-1} = ρ_0^{k-1}]                           (11.330)
      = P[z_{k+1}^{K-1} = ρ_{k+1}^{K-1} | s_k = σ_j]
        · P[s_k = σ_j, z_k = ρ_k | s_{k-1} = σ_i] · P[s_{k-1} = σ_i, z_0^{k-1} = ρ_0^{k-1}]
      = B_k(j) C_k(j | i) F_{k-1}(i)
and

    L_k^(ext)(β^(s)) = Σ_{i=1}^{N_s} B_k(j) C_k^(p)(j | i) F_{k-1}(i),     σ_j = f_s(β^(s), σ_i)     (11.333)
ii. L_k^(int)(β^(s)) depends on the received samples associated with the information vector and on the channel characteristics.

iii. L_k^(ext)(β^(s)) represents the extrinsic information extracted from the received samples associated with the parity check bits. This is the incremental information
     on the information vector obtained by the decoding process.
We associate with each bit of the code vector c_k a log-likelihood ratio (LLR) that
depends on the channel (see (11.285)), that is

    ℓ_k^(in) = [ℓ_k^(in,1), …, ℓ_k^(in,n0)] = [ℓ_k^(in,s), ℓ_k^(in,p)]     (11.334)

    ℓ_k^(in) = (2/σ_I²) ρ_k                                                (11.335)
and

    ℓ_k^(p)(j, i) = (1/2) Σ_{m=k0+1}^{n0} ℓ_k^(in,m) β^(m)                 (11.337)
and

    C'_k^(p)(j | i) = e^{ℓ_k^(p)(j, i)}                                    (11.340)
To compute the forward and backward metrics, we use, respectively, (11.327) and
(11.328), where the variable C_k(j | i) is replaced by C'_k(j | i) = C'_k^(s)(j | i) C'_k^(p)(j | i).
Similarly, in (11.333) C_k^(p)(j | i) is replaced by C'_k^(p)(j | i). Taking the logarithm
of (11.333) we obtain the extrinsic component ℓ_k^(ext)(β^(s)).
Finally, from (11.331), by ignoring non-essential terms, the log-likelihood associated with the information vector b_k = β^(s) is given by

where ℓ_k^(int)(β^(s)) = ℓ_k^(s)(β^(s)) is usually called the intrinsic component.
Expression (11.341) suggests an alternative method to (11.333) to obtain
ℓ_k^(ext)(β^(s)), which uses the direct computation of ℓ_k(β^(s)) by (11.329) and (11.330),
where C_k(j | i) is replaced by C'_k(j | i), whose factors are given in (11.339) and
    ℓ_k^(ext)(β^(s)) = ℓ_k(β^(s)) - ℓ_k^(a)(β^(s)) - ℓ_k^(int)(β^(s))      (11.342)
Going back to expression (11.341), detection of the vector b_k is performed according to the rule

In this case it is sufficient to determine the log-likelihoods ℓ_k(1) and ℓ_k(-1), or better
the LLR

    ℓ_k = ln( L_k(1) / L_k(-1) ) = ℓ_k(1) - ℓ_k(-1)                        (11.344)
Detection of the information bit is performed according to the rule

    ℓ_k^(a) = ln( P[b_k = 1] / P[b_k = -1] )                               (11.346)

from which we derive the a priori probabilities

    P[b_k = β^(s)] = e^{β^(s) ℓ_k^(a)} / (1 + e^{β^(s) ℓ_k^(a)})
                   = ( e^{-ℓ_k^(a)/2} / (1 + e^{-ℓ_k^(a)}) ) e^{(1/2) β^(s) ℓ_k^(a)},     β^(s) ∈ {-1, 1}     (11.347)
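A quick numerical check of (11.346)-(11.347): the two probabilities derived from an a priori LLR sum to one and reproduce the LLR itself.

```python
# Check of (11.347): probabilities from the a priori LLR ell_a.
import math

def apriori_prob(beta, ell_a):
    """P[b_k = beta] for beta in {-1, +1}, from the a priori LLR (11.346)."""
    return math.exp(beta * ell_a) / (1 + math.exp(beta * ell_a))

ell_a = 0.7
p_plus, p_minus = apriori_prob(1, ell_a), apriori_prob(-1, ell_a)
```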
By using LLRs, (11.336) yields

    ℓ_k^(int) = ℓ_k^(int)(1) - ℓ_k^(int)(-1) = ℓ_k^(in,1) = ℓ_k^(in,s)     (11.348)
The extrinsic component is obtained starting from (11.333) and using the above variables

    ℓ_k^(ext) = ln( L_k^(ext)(1) / L_k^(ext)(-1) ) = ℓ_k^(ext)(1) - ℓ_k^(ext)(-1)     (11.351)
From (11.341), apart from irrelevant terms, the LLR associated with the information bit b_k
can be written as

    ℓ_k = ℓ_k^(a) + ℓ_k^(int) + ℓ_k^(ext)                                  (11.352)
The decomposition (11.352) forms the basis for the iterative decoding of turbo codes.
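A small numerical check of (11.335): for antipodal signaling over the AWGN channel, the channel LLR computed from the two Gaussian likelihoods reduces to (2/σ_I²)ρ.

```python
# Numerical check of (11.335): ln p(rho | c=+1) / p(rho | c=-1) = 2 rho / sigma^2.
import math

def gauss_pdf(x, mean, sigma2):
    return math.exp(-(x - mean) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

sigma2 = 0.8
for rho in (-1.3, 0.2, 2.1):
    llr = math.log(gauss_pdf(rho, 1, sigma2) / gauss_pdf(rho, -1, sigma2))
    assert abs(llr - 2 * rho / sigma2) < 1e-9
```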
Observation 11.5
In the case of multilevel modulation and/or for transmission over channels with ISI, the
previous formulation of the decoding scheme remains unchanged, provided the expression
(11.285) for {ℓ_k^(in,m)}, m = 1, …, n0, is used in place of (11.335).
Example 11.5.2 (Nonsystematic code and LLR associated with the code bits)
Consider the case of a nonsystematic code. If the code is also non-recursive, for example as
illustrated on page 915 for k0 = 1, we need to use in place of (11.310) the state definition
(11.273).
Now all bits are parity check bits and (11.312) and (11.316) become, respectively,

    c_k = c_k^(p) = [c_k^(1), …, c_k^(n0)]                                 (11.353)

    z_k = z_k^(p) = [z_k^(1), …, z_k^(n0)]                                 (11.354)

However, the information vector is still given by b_k = [b_k^(1), …, b_k^(k0)] with values
α = [α^(1), …, α^(k0)], α^(i) ∈ {-1, 1}. The likelihood (11.319) is given by
The various terms with superscript (s) of the previous analysis vanish by setting k0 = 0.
Therefore (11.336) and (11.337) become

    ℓ_k^(int)(β^(s)) = ℓ_k^(s)(β^(s)) = 0                                  (11.356)

and

    ℓ_k^(p)(j, i) = (1/2) Σ_{m=1}^{n0} ℓ_k^(in,m) β^(m)                    (11.357)

    ℓ_k = ℓ_k^(a) + ℓ_k^(ext)                                              (11.358)
where ℓ_k^(ext) can be obtained directly using (11.351), (11.340), and (11.333).
In some applications it is useful to associate an LLR with the encoded bit c_k^(q), q =
1, …, n0, rather than with the information bit b_k. We define

    ℓ̄_{k,q} = ln( P[c_k^(q) = 1 | z_0^{K-1} = ρ_0^{K-1}] / P[c_k^(q) = -1 | z_0^{K-1} = ρ_0^{K-1}] )     (11.359)
Let ℓ̄_{k,q}^(a) be the a priori information on the code bits. The analysis is similar to the previous
case, but now, with respect to the encoder output, c_k^(q) is regarded as an information bit,
while the remaining bits c_k^(m), m = 1, …, n0, m ≠ q, are regarded as parity check bits.
Equations (11.336), (11.337), (11.349), and (11.350) are modified as follows:
    ℓ̄_{k,q}^(s)(β^(q)) = (1/2) ℓ_k^(in,q) β^(q)                            (11.360)

    ℓ̄_{k,q}^(p)(j, i) = (1/2) Σ_{m=1, m≠q}^{n0} ℓ_k^(in,m) β^(m)           (11.361)

and

    C'_{k,q}^(s)(j | i) = e^{(1/2) β^(q) (ℓ_k^(in,q) + ℓ̄_{k,q}^(a))}       (11.362)

    C'_{k,q}^(p)(j | i) = e^{ℓ̄_{k,q}^(p)(j, i)}                            (11.363)
Example 11.5.3 (Systematic code and LLR associated with the code bits)
With reference to the previous example, if the code is systematic, whereas (11.352) holds
for the systematic bit c_k^(1), for the parity check bits c_k^(q) the following relations hold [37].
For k0 = 1, let α be the value of the information bit b_k, b_k = α, with α ∈ {-1, 1}, associated
with the code vector

    ℓ̄_{k,q}^(p)(j, i) = (1/2) Σ_{m=1, m≠q}^{n0} ℓ_k^(in,m) β^(m) + (1/2) ℓ_k^(a) α     (11.367)
where ℓ_k^(a) is the a priori information of b_k. Furthermore

    C'_{k,q}^(s)(j | i) = e^{(1/2) β^(q) ℓ_k^(in,q)}                       (11.368)

    C'_{k,q}^(p)(j | i) = e^{ℓ̄_{k,q}^(p)(j, i)}                            (11.369)
Iterative decoding
In this section we consider the iterative decoding of turbo codes with k0 = 1. In this case,
as seen in Example 11.5.1, using the LLRs simplifies the procedure. In general, for k0 > 1
we should refer to the formulation (11.341).
We now give a step-by-step description of the decoding procedure of a turbo code with
rate 1/3, of the type shown in Figure 11.27, where each of the two component decoders
DEC1 and DEC2 implements the FBA for recursive systematic convolutional codes with rate
1/2. The decoder scheme is shown in Figure 11.30, where the subscript of the LLR corresponds
to the component decoder. In correspondence with the information bit b_k, the turbo code
generates the vector
where c_k^(1) = b_k. We now introduce the following notation for the observation vector ℓ_k^(in)
that relates to the considered decoder:

    ℓ_k^(in) = [ℓ_k^(in,s), ℓ_{k,1}^(in,p), ℓ_{k,2}^(in,p)]                (11.372)
where ℓ_k^(in,s) corresponds to the systematic part, and ℓ_{k,1}^(in,p) and ℓ_{k,2}^(in,p) correspond to the
parity check parts generated by the first and second convolutional encoder, respectively.
If some parity check bits are punctured to increase the rate of the code, at the receiver
the corresponding LLRs ℓ_k^(in,m) are set to zero.
1. First iteration

   1.1 Decoder DEC1
       With no a priori information available, we set

           ℓ_{k,1}^(a) = ln( P[b_k = 1] / P[b_k = -1] ) = 0                (11.373)

       For k = 0, 1, 2, …, K-1, having observed ℓ_k^(in,s) and ℓ_{k,1}^(in,p), we compute according
       to (11.349) and (11.350) the variables C'_k^(s) and C'_k^(p), and from these the corresponding forward metric F_k(j) (11.327). After the entire sequence has been
       received, we compute the backward metric B_k(i) (11.328) and, using (11.333),
       we find L_{k,1}^(ext)(1) and L_{k,1}^(ext)(-1). The decoder soft output is the extrinsic information given by the LLR

           ℓ_{k,1}^(ext) = ln( L_{k,1}^(ext)(1) / L_{k,1}^(ext)(-1) )      (11.374)
   1.2 Interleaver
       Because of the presence of the interleaver, the parity check bit c_n^(3) is obtained in
       correspondence with a transition of the convolutional encoder state determined by
       the information bit b_n, where n depends on the interleaver pattern. In decoding,
       the extrinsic information ℓ_{k,1}^(ext), extracted from DEC1, and the systematic observation ℓ_k^(in,s) are scrambled by the turbo code interleaver and associated with
       the corresponding observation ℓ_{n,2}^(in,p) to form the input of the second component
       decoder.
   1.3 Decoder DEC2
       The extrinsic information generated by DEC1 is set as the a priori information
       ℓ_{n,2}^(a) of the component decoder DEC2,
   1.4 Deinterleaver
       The deinterleaver realizes the inverse function of the interleaver,
       so that the extrinsic information extracted from DEC2, ℓ_{n,2}^(ext), is synchronized
       with the systematic part ℓ_k^(in,s) and the parity check part ℓ_{k,1}^(in,p) of the observation
       of DEC1. By a feedback loop the a posteriori information ℓ_{k,2}^(ext) is placed at
       the input of DEC1 as a priori information ℓ_{k,1}^(a), giving rise to an iterative
       structure.
2. Successive iterations
   Starting from the second iteration, each component decoder has at its input an a
   priori information. The information on the bits becomes more reliable as the a priori
   information stabilizes in sign and increases in amplitude.
3. Last iteration
   When the decoder achieves convergence, the iterative process can stop and form the
   overall LLR (11.352),

       ℓ_{k,overall} = ℓ_k^(in,s) + ℓ_{k,1}^(ext) + ℓ_{k,2}^(ext),     k = 0, 1, …, K-1     (11.376)
To make decoding more reliable, the final state of each component decoder is set to
zero, thus enabling an initialization of the backward metric as in (8.244). As illustrated
in Figure 11.31, at the instant following the input of the last information bit, that is for
k = K, the commutator is switched to the lower position, and therefore we have d_k = 0;
after ν clock intervals the zero state is reached. The bits c_k^(1) and c_k^(2), for k = K, K+1, …, K+ν-1, are appended at the end of the code sequence to be transmitted.
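The message-passing schedule of steps 1-3 can be sketched as follows. Here `siso` is a placeholder for a component FBA decoder implementing (11.327), (11.328), and (11.333); the function and variable names are illustrative.

```python
# Structural sketch of the iterative turbo decoding schedule. `siso` stands
# in for a component decoder returning extrinsic LLRs from the systematic
# input, the parity input, and the a priori LLRs.

def turbo_decode(l_s, l_p1, l_p2, pi, siso, n_iter=4):
    """l_s: systematic LLRs; l_p1, l_p2: parity LLRs for DEC1/DEC2;
    pi: interleaver pattern (list of indices); returns overall LLRs (11.376)."""
    K = len(l_s)
    l_a1 = [0.0] * K                         # (11.373): no a priori at start
    for _ in range(n_iter):
        l_e1 = siso(l_s, l_p1, l_a1)         # DEC1 extrinsic, (11.374)
        # interleave extrinsic and systematic observations for DEC2
        l_a2 = [l_e1[pi[n]] for n in range(K)]
        l_s2 = [l_s[pi[n]] for n in range(K)]
        l_e2 = siso(l_s2, l_p2, l_a2)        # DEC2 extrinsic
        # deinterleave and feed back as a priori information of DEC1
        l_a1 = [0.0] * K
        for n in range(K):
            l_a1[pi[n]] = l_e2[n]
    return [l_s[k] + l_e1[k] + l_a1[k] for k in range(K)]   # (11.376)
```

With the identity interleaver and a trivial stub for `siso`, the skeleton reproduces the combination rule (11.376) at its output.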
Performance evaluation
Performance of the turbo code with the encoder of Figure 11.27 is evaluated in terms
of error probability and convergence of the iterative decoder implemented by the FBA.
For the memoryless AWGN channel, error probability curves versus E_b/N_0 are plotted in
Figure 11.32 for a sequence of information bits of length K = 640 and various numbers of
Figure 11.31. Recursive systematic component encoder with the commutator used for trellis termination.
Figure 11.32. Performance of the turbo code defined by the UMTS standard, with length
of the information sequence K = 640, and various numbers of iterations of the iterative
decoding process.
Figure 11.33. Curves of convergence of the decoder for the turbo code defined by the UMTS
standard, for K = 320 and various values of E_b/N_0.
11.6. Iterative detection and decoding 943
Figure 11.34. Performance of the turbo code defined by the UMTS standard achieved after
12 iterations, for K = 40, 320 and 640.
iterations of the iterative decoding process. Note that performance improves as the number
of iterations increases; however, the gain between consecutive iterations becomes smaller
as the number of iterations increases.
In Figure 11.33, the error probability P_bit^(dec) is given as a function of the number of iterations, for fixed values of E_b/N_0 and K = 320. From the behavior of the error probability
we deduce possible criteria for stopping the iterative decoding process at convergence [36].
A timely stop of the iterative decoding process leads to a reduction of the decoding delay
and of the overall computational complexity of the system. Note, however, that convergence
is not always guaranteed.
The performance of the code depends on the length K of the information sequence.
Figure 11.34 illustrates how the bit error probability decreases by increasing K, for a
constant E_b/N_0. A higher value of K corresponds to an interleaver operating on longer sequences,
and thus the assumption of independence among the inputs of each component decoder is
better satisfied. Moreover, the burst errors introduced by the channel are distributed over
the whole original sequence, increasing the correction capability of the decoder. As the length
of the interleaver grows, the latency of the system also increases.
Figure 11.35. Encoder structure, bit mapper, and modulator; for 16-PAM: c_k = [c_{4k},
c_{4k+1}, c_{4k+2}, c_{4k+3}].
Figure 11.36. Scheme for iterative detection and decoding: SISO detector and SISO decoder, connected by a deinterleaver in the forward path and an interleaver in the feedback path, followed by the SI decoder that yields b̂_l.
code (SCCC). The procedure of SISO detection and SI decoding of page 916 can be made
iterative by applying the principles of the previous section, including a SISO decoding
stage. With reference to Figure 11.36, a step-by-step description follows.
0. Initialization
   Suppose we have no information on the a priori probability of the code bits; therefore
   we associate with c_n a zero LLR,

       ℓ_{n,det}^(a) = 0                                                   (11.378)
1. Detector
   First we associate a log-likelihood with the two possible values of c_n = γ,
   γ ∈ {-1, 1}, according to the rule

   Then we express the symbol a_k as a function of the bits {c_n} according to the bit
   mapper, for example, for 16-PAM,

   Assuming the sequence {c_n} is a sequence of i.i.d. binary symbols, we associate with
   each value of the symbol a_k the a priori information expressed by the log-likelihood

       ℓ_{k,det}^(a,SYM)(γ) = Σ_{t=0}^{3} ℓ_{4k+t,det}^(a)(γ^(t)),     γ ∈ A     (11.381)
   To determine the extrinsic information associated with {c_n}, we subtract the a priori
   information from (11.384),

       ℓ_{n,det}^(ext) = ℓ_{n,det} - ℓ_{n,det}^(a)                         (11.385)

   Note that in this application the detector considers the bits {c_n} as information bits,
   and the log-likelihood associated with c_n at the detector output is due to the channel
   information⁹ in addition to the a priori information.
   In [38], the quantity ℓ_{n,det}^(a) in (11.385) is weighted by a coefficient that is
   initially chosen small, while the a priori information is not yet reliable, and is increased
   after each iteration.
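The bit-to-symbol combination (11.381) can be sketched as follows; the two-symbol bit map used in the example is made up for illustration and does not reproduce the 16-PAM bit mapper of Figure 11.35.

```python
# Sketch of (11.381): the a priori log-likelihood of a symbol is the sum of
# the a priori log-likelihoods of the bits mapped onto it. The association
# `bits_of` below is a made-up two-symbol example.

def symbol_apriori(l_bits, bits_of):
    """l_bits[t][b]: a priori log-likelihood of bit t taking value b in {-1,+1};
    bits_of[alpha]: tuple of bit values mapped onto symbol alpha."""
    return {alpha: sum(l_bits[t][bits[t]] for t in range(len(bits)))
            for alpha, bits in bits_of.items()}

l_bits = [{-1: 0.1, 1: 0.4}, {-1: 0.2, 1: 0.3}]
l_sym = symbol_apriori(l_bits, {'A': (-1, 1), 'B': (1, -1)})
```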
2. Deinterleaver
   The metrics ℓ_{n,det}^(ext) are re-ordered by the deinterleaver to provide the sequence ℓ_{m,det}^(ext).
9 For the iterative decoding of turbo codes, this information is defined as intrinsic.
3. Decoder (SISO)
   As input LLR, we use

       ℓ_{m,dec}^(in) = ℓ_{m,det}^(ext)                                    (11.386)

   and we set

       ℓ_{m,dec}^(a) = 0                                                   (11.387)

   in the absence of a priori information on the code bits {c_m}. Indeed, we note that in
   the various formulae the roles of ℓ_{m,dec}^(a) and ℓ_{m,dec}^(in) can be interchanged.

   Depending on whether the code is systematic or not, we use the SISO decoding
   procedure reported in Example 11.5.1 or Example 11.5.2, respectively. In both cases
   we associate with the encoded bits c_m the quantity

       ℓ_{m,dec}^(ext) = ℓ_{m,dec} - ℓ_{m,dec}^(in)                        (11.388)

   which is passed to the SISO detector as a priori information, after reordering by the
   interleaver.
4. Last iteration
   After a suitable number of iterations, the various metrics stabilize, and from the LLRs
   {ℓ_{m,dec}^(in)} associated with {c_m} the SI decoding of the bits {b_l} is performed, using the
   procedure of Example 11.5.1.
check matrix H, such that the equation Hc = 0 is satisfied for all code words c (see
(11.20)). Each row of the r0 × n0 parity check matrix, where r0 = n0 - k0 is the number of parity check bits, defines a parity check equation that is satisfied by each code
word c. For example, the (7,4) Hamming code is defined by the following parity check
equations
    [ 1 1 1 0 1 0 0 ]   [ c1 ]
    [ 1 1 0 1 0 1 0 ] · [ c2 ]          c5 = c1 ⊕ c2 ⊕ c3   (check 1)
    [ 1 0 1 1 0 0 1 ]   [ c3 ] = 0  →   c6 = c1 ⊕ c2 ⊕ c4   (check 2)     (11.389)
                        [ c4 ]          c7 = c1 ⊕ c3 ⊕ c4   (check 3)
                        [ c5 ]
                        [ c6 ]
                        [ c7 ]
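The three check equations of (11.389) can be verified directly; the helper names below are illustrative.

```python
# Verification of (11.389) for the (7,4) Hamming code: a word built from the
# three check equations satisfies H c = 0 in GF(2).
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def encode_hamming(c1, c2, c3, c4):
    """Append the parity bits c5, c6, c7 defined by (11.389)."""
    return [c1, c2, c3, c4, c1 ^ c2 ^ c3, c1 ^ c2 ^ c4, c1 ^ c3 ^ c4]

def checks(H, c):
    """Compute the parity checks H c over GF(2)."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

cw = encode_hamming(1, 0, 1, 1)
print(checks(H, cw))   # -> [0, 0, 0]
```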
LDPC codes differ in major ways from the above simple example: they usually
have long block lengths n0 in order to achieve near-Shannon-limit performance, their parity
check matrices are defined in nonsystematic form, and they exhibit a number of ones that is much
smaller than r0 · n0. A parity check matrix for a (J, K)-regular LDPC code has exactly J ones
in each of its columns and K ones in each of its rows.
A parity check matrix can generally be represented by a bipartite graph, also called a
Tanner graph, with two types of nodes: the bit nodes and the parity check nodes (or check
nodes) [41]. A bit node n, representing the code bit c_n, is connected to the check node m
only if the element (m, n) of the parity check matrix is equal to 1. No bit (check) node is
connected to another bit (check) node. For example, the (7,4) Hamming code can be represented
by the graph shown in Figure 11.37.
We note in this specific case that, because the parity check matrix is given in systematic
form, bit nodes c5, c6, and c7 in the associated graph are connected to single distinct check
nodes. The parity check matrix of a (J, K)-regular LDPC code leads to a graph where
every bit node is connected to precisely J check nodes and every check node is connected
to precisely K bit nodes. We emphasize that the performance of an LDPC code depends on
the random realization of the parity check matrix H. Hence these codes form a constrained
random code ensemble.
Graphical representations of LDPC codes are useful for deriving and implementing the
iterative decoding procedure introduced in [6]. The Gallager decoder is a message-passing
Figure 11.37. Tanner graph corresponding to the parity check matrix of the (7,4) Hamming
code.
decoder, in a sense to be made clear below, based on the so-called sum-product algorithm,
which is a general decoding algorithm for codes defined on graphs.10
Encoding procedure
Encoding is performed by multiplying the vector of k0 information bits b by the generator
matrix G of the LDPC code:

    c^T = b^T G                                                   (11.390)

where the operations are in GF(2). Recall that the generator and parity check matrices satisfy
the relation

    H G^T = 0                                                     (11.391)
From (11.27), the parity check matrix in systematic form is H̃ = [Ã  I], where I is the
r0 × r0 identity matrix, and Ã is a binary matrix. Recall also that any other r0 × n0 matrix
H whose rows span the same space as H̃ is a valid parity check matrix.
Given the block length n0 of the transmitted sequence and the block length k0 of the
information sequence, we select a column weight J greater than or equal to 3. To define
the code, we generate a rectangular r0 × n0 matrix H = [A B] at random with exactly J
ones per column and, assuming a proper choice of n0 and k0, exactly K ones per row.
The r0 × k0 matrix A and the square r0 × r0 matrix B are very sparse. If the rows of H
are independent, which is usually true with high probability if J is odd [40], by Gaussian
elimination and reordering of columns we determine an equivalent parity check matrix H̃
in systematic form. From (11.26), we obtain the generator matrix in systematic form as
    G^T = [ I ; Ã ] = [ I ; B⁻¹A ]                                (11.392)

where I is the k0 × k0 identity matrix, and the two blocks are stacked vertically.
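The construction above can be sketched in a few lines. For brevity the example reuses the small (7,4) Hamming matrix of (11.389), which is already in systematic form, so Gaussian elimination leaves it unchanged; no column reordering is attempted, and the function names are illustrative.

```python
# Sketch: bring a parity check matrix to the systematic form [A~ I] by GF(2)
# Gaussian elimination, build G^T = [I ; A~], and verify H G^T = 0 (11.391).

def gf2_systematic(H):
    """Row-reduce H so its last r0 columns form the identity (assumes the
    input allows this without column reordering)."""
    H = [row[:] for row in H]
    r0, n0 = len(H), len(H[0])
    k0 = n0 - r0
    for i in range(r0):
        col = k0 + i
        piv = next(r for r in range(i, r0) if H[r][col])   # pivot search
        H[i], H[piv] = H[piv], H[i]
        for r in range(r0):
            if r != i and H[r][col]:
                H[r] = [a ^ b for a, b in zip(H[r], H[i])]
    return H

def generator_from(Ht):
    """G^T = [I ; A~] stacked vertically, from H~ = [A~ I]."""
    r0, n0 = len(Ht), len(Ht[0])
    k0 = n0 - r0
    A = [row[:k0] for row in Ht]
    return [[1 if i == j else 0 for j in range(k0)] for i in range(k0)] + A

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
GT = generator_from(gf2_systematic(H))
# verify H G^T = 0 in GF(2)
zeros = [[sum(H[i][k] * GT[k][j] for k in range(7)) % 2 for j in range(4)]
         for i in range(3)]
```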
Assuming initially antipodal linear signaling over an ideal AWGN channel, for the vector
of transmitted symbols a = [a_1, …, a_{n0}]^T, a_k ∈ {-1, 1}, corresponding to the code word
c = [c_1, …, c_{n0}]^T, the received vector is given by

    z = a + w                                                     (11.393)

where w denotes a vector of Gaussian noise samples with variance σ_I².
Decoding algorithm
Adopting the MAP criterion (8.221), the optimal decoder returns the components of the
vector b̂ = [b̂_1, …, b̂_{k0}] that maximize the a posteriori probabilities

    b̂_k = arg max_{β∈{0,1}} P[b_k = β | z = ρ, G],     k = 1, …, k0        (11.394)
10 A wide variety of other algorithms (e.g., the Viterbi algorithm, the forward-backward algorithm, the iterative
turbo decoding algorithm, the fast Fourier transform, …) can also be derived as specific instances of the
sum-product algorithm [42].
11.7. Low-density parity check codes 949
Note that (11.394) is equivalent to the MAP criterion expressed by (11.321). However, an
attempt to evaluate (11.394) by the direct computation of the joint probability distribution of
the components of b given the observation would be impractical. Assuming the probability
distribution of b is uniform, and w statistically independent of b, we resort to the knowledge of the parity
check matrix to simplify the decoding problem. We will find the most likely binary vector
x such that (see (11.20))

    s = H x = 0                                                   (11.395)

given the received noisy vector z and a valid parity check matrix H.
We call checks the elements s_i, i = 1, …, r0, of the vector s, which are represented
by the check nodes in the corresponding Tanner graph. Then the aim is to compute the
marginal a posteriori probabilities
We denote by H_{i,n} the element with indices (i, n) of the parity check matrix H. Let L(i) =
{n : H_{i,n} = 1}, i = 1, …, r0, be the set of the bit nodes that participate in the check with
index i. Also, let L(i)\ñ be the set L(i) from which the element with index ñ has been
removed. Similarly, let M(n) = {i : H_{i,n} = 1}, n = 1, …, n0, be the set of the check
nodes in which the bit with index n participates.
The algorithm consists of two alternating steps, illustrated in Figure 11.38, in which the
quantities q_{i,n}^β and r_{i,n}^β, associated with each non-zero element of the matrix H, are iteratively
updated. The quantity q_{i,n}^β denotes the probability that x_n = β, β ∈ {0, 1}, given the
information obtained via checks other than check i:

    q_{i,n}^β = P[x_n = β | {s_{i'} = 0, i' ∈ M(n)\i}, z = ρ]              (11.398)
þ
Given xn D þ; þ 2 f0; 1g, the quantity ri;n denotes the probability of check i being
þ
satisfied and the other bits having a known distribution (given by the probabilities fqi;n 0 :
n 0 2 L.i/nn; þ 2 f0; 1gg):
þ
ri;n D P[si D 0; fxn 0 ; n 0 2 L.i/nng j xn D þ; z D ρ] (11.400)
In the first step, the quantities r_{i,n}^β associated with check node i are updated and passed
as messages to the bit nodes checked by check node i. This operation is performed for all
check nodes. In the second step, the quantities q_{i,n}^β associated with bit node n are updated and
passed as messages to the check nodes that involve bit node n. This operation is performed
for all bit nodes.
From (11.395), we note the property of (11.400) that

    r_{i,n}^0 = 1 - P[s_i = 1, {x_{n'}, n' ∈ L(i)\n} | x_n = 0, z = ρ]
              = 1 - P[s_i = 0, {x_{n'}, n' ∈ L(i)\n} | x_n = 1, z = ρ]     (11.401)
              = 1 - r_{i,n}^1
Initialization. Let p_n^0 = P[x_n = 0 | z = ρ] denote the probability that x_n = 0 given the
observation, and p_n^1 = P[x_n = 1 | z = ρ] = 1 - p_n^0. For the AWGN channel with
binary antipodal input symbols considered in this section, we have (see (8.262))

    p_n^0 = 1 / (1 + e^{-2ρ_n/σ_I²}),     p_n^1 = 1 / (1 + e^{2ρ_n/σ_I²})     (11.402)

Let q_{i,n}^β = p_n^β, n ∈ L(i), i = 1, …, r0, β ∈ {0, 1}.
First step. We run through the checks, and for the i-th check we compute, for each n ∈ L(i),
the probability r_{i,n}^0 that, given x_n = 0, s_i = 0 and the other bits {x_{n'} : n' ≠ n} have
a distribution {q_{i,n'}^0, q_{i,n'}^1}.
Example 11.7.1
Assume K = 4 and L(i) = {n_1, n_2, n_3, n_4}. The observation s_i can be expressed in
terms of the input variables x_k, k ∈ L(i), as

    s_i = x_{n_1} + x_{n_2} + x_{n_3} + x_{n_4} = Σ_{l=1}^{K} x_{n_l}      (11.404)
where j̄ denotes the one's complement of j, with the initial condition F_{n_0}(0) = 1.
2. Backward metric:

       B_{n_k}(j) = P[s_i = 0 | s_{n_k} = j]
                  = Σ_{m=0}^{1} P[s_i = 0 | s_{n_{k+1}} = m, s_{n_k} = j] P[s_{n_{k+1}} = m | s_{n_k} = j]
                  = Σ_{m=0}^{1} P[s_i = 0 | s_{n_{k+1}} = m] P[x_{n_{k+1}} = m ⊕ j],     j ∈ {0, 1}     (11.408)

   using (11.405) and the fact that s_i is independent of s_{n_k} given s_{n_{k+1}}. From
   (11.408), we obtain the recursive equation

       B_{n_k}(j) = P[x_{n_{k+1}} = j] B_{n_{k+1}}(0) + P[x_{n_{k+1}} = j̄] B_{n_{k+1}}(1),     k = 1, …, K     (11.409)

   with the initial condition B_{n_{K+1}}(0) = 1, which is obtained from the observation
   s_i = 0.
Therefore the probabilities r_{i,n_k}^β, β ∈ {0, 1}, are given by (see (8.244))

    r_{i,n_k}^0 = F_{n_k}(0) B_{n_k}(0) + F_{n_k}(1) B_{n_k}(1),     k = 1, …, K     (11.410)

and r_{i,n_k}^1 = 1 - r_{i,n_k}^0.
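The forward/backward computation of r_{i,n}^0 and r_{i,n}^1 for a single check node, per (11.409)-(11.410), can be sketched as follows; q0 and q1 are the incoming probabilities q_{i,n}^0 and q_{i,n}^1 of the bits participating in the check, and the function name is illustrative.

```python
# Check-node update for one check of an LDPC code: forward/backward
# recursions over the running parity, yielding r^0 and r^1 for each bit.

def check_update(q0, q1):
    K = len(q0)
    # forward: F[l][j] = P[parity of bits 0..l-1 equals j]
    F = [[1.0, 0.0]]
    for l in range(K):
        F.append([F[-1][0] * q0[l] + F[-1][1] * q1[l],
                  F[-1][0] * q1[l] + F[-1][1] * q0[l]])
    # backward: B[l][j] = P[parity of bits l..K-1 equals j]
    B = [[0.0, 0.0] for _ in range(K + 1)]
    B[K] = [1.0, 0.0]
    for l in range(K - 1, -1, -1):
        B[l] = [q0[l] * B[l + 1][0] + q1[l] * B[l + 1][1],
                q1[l] * B[l + 1][0] + q0[l] * B[l + 1][1]]
    # r^0 for bit l: the parity of the other bits must be zero, i.e. the
    # parities before and after bit l must be equal
    r0 = [F[l][0] * B[l + 1][0] + F[l][1] * B[l + 1][1] for l in range(K)]
    return r0, [1 - x for x in r0]
```

For a check on three bits the result matches the closed form r^0 = (1 + ∏_{l'≠l}(q^0_{l'} - q^1_{l'}))/2.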
where α_{i,n} is chosen such that q_{i,n}^0 + q_{i,n}^1 = 1. Taking into account the information
from all check nodes, from (11.399) we can also compute the “pseudo a posteriori
probabilities” q_n^0 and q_n^1 at this iteration, given by

    q_n^0 = α_n p_n^0 ∏_{i∈M(n)} r_{i,n}^0                                 (11.413)

    q_n^1 = α_n p_n^1 ∏_{i∈M(n)} r_{i,n}^1                                 (11.414)
We note that the sum-product algorithm for the decoding of LDPC codes has been derived under the assumption that the check nodes $s_i$, $i = 1, \ldots, r_0$, are statistically independent given the bit nodes $x_n$, $n = 1, \ldots, n_0$, and vice versa, i.e., the variables of the vectors $s$ and $x$ form a Markov field [42]. Although this assumption is not strictly true, it turns out that the algorithm yields very good performance with low computational complexity. However, we note that parity check matrices leading to Tanner graphs that exhibit cycles of length four, such as the one depicted in Figure 11.39, should be avoided. In fact, this structure would introduce non-negligible statistical dependence between nodes. In graph theory, the length of the shortest cycle in a graph is referred to as its girth. A general method for constructing Tanner graphs with large girth is described in [45].
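The check-node computation described above, i.e. the backward recursion (11.409) combined with a forward metric as in (11.410), can be sketched in code. The following is a minimal sketch for a single check node; the function and variable names are ours, not the book's, and probabilities are handled without normalization safeguards.

```python
def check_node_update(q0):
    """Forward-backward message computation for one parity check.

    q0[k] = P[x_{n_k} = 0], the bit-to-check message q^0 for the k-th
    neighboring bit; returns (r0, r1) with r0[k] = P[s_i = 0 | x_{n_k} = 0]
    and r1[k] = 1 - r0[k].
    """
    K = len(q0)
    q1 = [1.0 - p for p in q0]
    # Forward metric: F[k] = distribution of the partial XOR of the first k bits.
    F = [(1.0, 0.0)]
    for k in range(K):
        f0, f1 = F[-1]
        F.append((f0 * q0[k] + f1 * q1[k], f0 * q1[k] + f1 * q0[k]))
    # Backward metric: B[k][j] = P[parity of bits k..K-1 equals j]; B[K] = (1, 0)
    # plays the role of the initial condition obtained from s_i = 0.
    B = [None] * (K + 1)
    B[K] = (1.0, 0.0)
    for k in range(K - 1, -1, -1):
        b0, b1 = B[k + 1]
        B[k] = (q0[k] * b0 + q1[k] * b1, q0[k] * b1 + q1[k] * b0)
    # Combine: with x_{n_k} = 0 the partial sum is unchanged across position k.
    r0 = [F[k][0] * B[k + 1][0] + F[k][1] * B[k + 1][1] for k in range(K)]
    r1 = [1.0 - v for v in r0]
    return r0, r1
```

As a sanity check, for a check with two bits and $q^0 = (0.9, 0.8)$, the check is satisfied with $x_{n_1} = 0$ exactly when $x_{n_2} = 0$, so $r^0_{i,n_1} = 0.8$; the result also agrees with Gallager's closed form $r^0_{i,n_k} - r^1_{i,n_k} = \prod_{k' \neq k} (q^0_{k'} - q^1_{k'})$.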
11.7. Low-density parity check codes 953
Example of application
We study in this section the application of binary LDPC codes to two-dimensional QAM
transmission over an AWGN channel [46]. The block diagrams of the encoding and decoding processes are shown in Figure 11.40.
For bit mapping, log2 M code bits are mapped into one QAM symbol taken from an
M-point constellation using Gray mapping. At the receiver, the received samples, which
represent noisy QAM symbols, are input to a soft detector that provides soft information
on individual code bits in the form of a posteriori probabilities. These probabilities are
employed to carry out the message-passing LDPC decoding procedure described in the
previous section.
Assuming that the employed QAM constellation is square, with $\log_2 M$ equal to an even number, and that the in-phase and quadrature noise components are independent, it is computationally advantageous to perform soft detection independently for the real and imaginary parts of the received complex samples. We will therefore consider only square QAM constellations. Bit mapping for the real or the imaginary part of transmitted QAM symbols is performed by mapping a group of $\frac{1}{2}\log_2 M$ code bits $[c_0, c_1, \ldots, c_{\frac{1}{2}(\log_2 M) - 1}]$ that are part of a code word into one of the $\sqrt{M}$ real symbols within the set
$$\mathcal{A} = \{-(\sqrt{M} - 1), -(\sqrt{M} - 3), \ldots, -1, +1, \ldots, +(\sqrt{M} - 1)\} \qquad (11.416)$$
Denoting by $z_n$ the real or the imaginary part of a noisy received signal, we have
$$z_n = a_n + w_n \qquad (11.417)$$
and the a posteriori probabilities of the code bits are given by
$$P[c_\ell = \beta \mid z_n = \rho_n] = \frac{\displaystyle\sum_{\alpha \in \mathcal{A} :\, c_\ell = \beta} e^{-\frac{(\rho_n - \alpha)^2}{2\sigma_I^2}}}{\displaystyle\sum_{\alpha \in \mathcal{A}} e^{-\frac{(\rho_n - \alpha)^2}{2\sigma_I^2}}} \qquad \ell = 0, 1, \ldots, \tfrac{1}{2}(\log_2 M) - 1, \quad \beta \in \{0, 1\} \qquad (11.418)$$
where the summation in the numerator is taken over all symbols $\alpha \in \mathcal{A}$ for which $c_\ell = \beta$, $\beta \in \{0, 1\}$.
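A direct implementation of (11.418) for one dimension of a square QAM constellation might look as follows. The Gray labeling used here is one concrete choice made for illustration; the text only requires some Gray mapping.

```python
import math

def bit_posteriors(rho, sqrt_M, sigma2):
    """P[c_l = 1 | z_n = rho] for the sqrt(M)-point real symbol set (11.416)."""
    nbits = int(math.log2(sqrt_M))
    symbols = [2 * i - (sqrt_M - 1) for i in range(sqrt_M)]  # {-(sqrt(M)-1), ..., +(sqrt(M)-1)}
    labels = [i ^ (i >> 1) for i in range(sqrt_M)]           # a Gray labeling (our choice)
    # Unnormalized Gaussian likelihoods exp(-(rho - alpha)^2 / (2 sigma_I^2))
    w = [math.exp(-(rho - a) ** 2 / (2 * sigma2)) for a in symbols]
    tot = sum(w)
    return [sum(wi for wi, lab in zip(w, labels) if (lab >> l) & 1) / tot
            for l in range(nbits)]
```

With $\rho_n$ close to one symbol and a small noise variance, the posteriors approach the bits of that symbol's label, as expected.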
where $\Gamma$ is the signal-to-noise ratio given by (6.105). In general, the relation between $M$ and the rate of the encoder-modulator is given by (11.1),
$$R_I = \frac{k_0}{n_0}\, \frac{\log_2 M}{2} \qquad (11.420)$$
Recall also, from (6.191), that the signal-to-noise ratio per dimension is given by
$$\Gamma_I = \Gamma = 2 R_I \frac{E_b}{N_0} \qquad (11.421)$$
Using (6.288) we introduce the normalized signal-to-noise ratio
$$\overline{\Gamma}_I = \frac{\Gamma_I}{2^{2 R_I} - 1} = \frac{2 R_I}{2^{2 R_I} - 1}\, \frac{E_b}{N_0} \qquad (11.422)$$
Then for an uncoded $M$-QAM system we express (11.419) as
$$P_e \simeq 4\, Q\left(\sqrt{3\, \overline{\Gamma}_I}\right) \qquad (11.423)$$
As illustrated in Figure 6.54, the curve of $P_e$ versus $\overline{\Gamma}_I$ indicates that the "gap to capacity" for uncoded QAM with $M \gg 1$ is equal to $\Gamma_{gap,dB} \simeq 9.8$ dB at a symbol error probability of $10^{-7}$. We therefore determine the value of the normalized signal-to-noise ratio $\overline{\Gamma}_I^{(c)}$ needed for the coded system to achieve a symbol error probability of $10^{-7}$, and compute the coding gain at that symbol error probability as
$$G_{code} = 9.8 - 10 \log_{10}\left(\overline{\Gamma}_I^{(c)}\right) \ \text{dB} \qquad (11.424)$$
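As a numerical illustration of (11.422) and (11.424), a short sketch with our own helper functions and an illustrative input value:

```python
import math

def normalized_snr_db(EbN0_dB, R_I):
    """Normalized SNR in dB, from (11.422): 2 R_I (Eb/N0) / (2^(2 R_I) - 1)."""
    EbN0 = 10 ** (EbN0_dB / 10)
    return 10 * math.log10(2 * R_I * EbN0 / (2 ** (2 * R_I) - 1))

def coding_gain_db(norm_snr_coded_dB):
    """G_code from (11.424), at a symbol error probability of 1e-7."""
    return 9.8 - norm_snr_coded_dB

# A hypothetical coded system needing a normalized SNR of 4.0 dB at Pe = 1e-7
# would have a coding gain of 9.8 - 4.0 = 5.8 dB.
print(round(coding_gain_db(4.0), 1))
```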
Table 11.22 LDPC codes considered for the simulation and coding gains achieved at a symbol error probability of $10^{-7}$ for different QAM constellations. The spectral efficiencies $\nu$ are also indicated.
From Figure 6.54, as for large signal-to-noise ratios the Shannon limit cannot be approached to within less than 1.53 dB without shaping, we note that an upper limit to the coding gain measured in this manner is about 8.27 dB. Simulation results for three high-rate $(n_0, k_0)$ binary LDPC codes are specified in Table 11.22 in terms of the coding gains obtained at a symbol error probability of $10^{-7}$ for transmission over an AWGN channel for 16, 64, and 4096-QAM modulation formats. Transmitted QAM symbols are obtained from coded bits via Gray mapping. To measure error probabilities, one code word is decoded using the message-passing (sum-product) algorithm for a given maximum number of iterations. Figure 11.41 shows the effect on performance of the maximum number of iterations allowed in the decoding process for Code 2 specified in Table 11.22 and 16-QAM transmission.
Figure 11.41. Performance of LDPC decoding with Code 2 and 16-QAM for various values of the maximum number of iterations.
The codes given in Table 11.22 are due to MacKay and have been obtained by a random
construction method. The results of Table 11.22 indicate that LDPC codes offer net coding
gains that are similar to those that have been reported for turbo codes. Furthermore, LDPC
codes achieve asymptotically an excellent performance without exhibiting “error floors”
and admit a wide range of trade-offs between performance and decoding complexity.
Bibliography
[1] S. Lin and D. J. Costello Jr., Error control coding. Englewood Cliffs, NJ: Prentice-
Hall, 1983.
[2] R. E. Blahut, Theory and practice of error control codes. Reading, MA: Addison-
Wesley, 1983.
[3] W. W. Peterson and E. J. Weldon Jr., Error-correcting codes. Cambridge, MA: MIT
Press, 2nd ed., 1972.
[4] J. K. Wolf, Lecture notes. San Diego, CA: University of California.
[5] S. B. Wicker and V. K. Bhargava, eds., Reed-Solomon codes and their applications.
Piscataway, NJ: IEEE Press, 1994.
[6] R. Gallager, Information theory and reliable communication. New York: John Wiley
& Sons, 1968.
[7] J. Hagenauer and P. Hoeher, “A Viterbi algorithm with soft-decision output and its
applications”, in Proc. GLOBECOM ’89, Dallas, Texas, pp. 2828–2833, Nov. 1989.
[8] M. P. C. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on
ordered statistics”, IEEE Trans. on Information Theory, vol. 41, pp. 1379–1396, Sept.
1995.
[9] M. P. C. Fossorier and S. Lin, "Soft-input soft-output decoding of linear block codes based on ordered statistics", in Proc. GLOBECOM '98, Sydney, Australia, pp. 2828–2833, Nov. 1998.
[10] D. J. Costello Jr., J. Hagenauer, H. Imai, and S. B. Wicker, “Applications of error-
control coding”, IEEE Trans. on Information Theory, vol. 44, pp. 2531–2560, Oct.
1998.
[11] R. M. Fano, “A heuristic discussion on probabilistic decoding”, IEEE Trans. on In-
formation Theory, vol. 9, pp. 64–74, Apr. 1963.
[12] K. Zigangirov, “Some sequential decoding procedures”, Probl. Peredachi Informatsii,
vol. 2, pp. 13–25, 1966.
11. Bibliography 957
[13] F. Jelinek, “An upper bound on moments of sequential decoding effort”, IEEE Trans.
on Information Theory, vol. 15, pp. 140–149, Jan. 1969.
[14] F. Jelinek, “Fast sequential decoding algorithm using a stack”, IBM Journal of Re-
search and Development, vol. 13, pp. 675–685, Nov. 1969.
[17] S. Benedetto and E. Biglieri, Principles of digital transmission with wireless applica-
tions. New York: Kluwer Academic Publishers, 1999.
[18] G. D. Forney, Jr., Concatenated codes. Cambridge, MA: MIT Press, 1966.
[19] M. P. C. Fossorier, F. Burkert, S. Lin, and J. Hagenauer, “On the equivalence between
SOVA and max-log-MAP decodings”, IEEE Communications Letters, vol. 2, pp. 137–
139, May 1998.
[21] C. Berrou and A. Glavieux, “Near optimum error-correcting coding and decoding:
turbo-codes”, IEEE Trans. on Communications, vol. 44, pp. 1261–1271, Oct. 1996.
[23] S. Benedetto and G. Montorsi, “Unveiling turbo codes: some results on parallel con-
catenated coding schemes”, IEEE Trans. on Information Theory, vol. 42, pp. 409–428,
Mar. 1996.
[24] 3rd Generation Partnership Project (3GPP), Technical Specification Group (TSG),
Radio Access Network (RAN), Working Group 1 (WG1), “Multiplexing and channel
coding (TDD)”, Document TS 1.22, v.2.0.0, Apr. 2000.
[26] Consultative Committee for Space Data Systems (CCSDS), Telemetry Systems (Panel
1), “Telemetry channel coding”, Blue Book, CCSDS 101.0-B-4: Issue 4, May 1999.
[27] S. Benedetto, R. Garello, and G. Montorsi, “A search for good convolutional codes to
be used in the construction of turbo codes”, IEEE Trans. on Communications, vol. 46,
pp. 1101–1105, Sept. 1998.
[28] O. Y. Takeshita and D. J. Costello Jr., “New deterministic interleaver designs for turbo
codes”, IEEE Trans. on Information Theory, vol. 46, pp. 1988–2006, Sept. 2000.
[29] D. Divsalar and F. Pollara, “Turbo codes for PCS applications”, in Proc. 1995 IEEE
Int. Conference on Communications, Seattle, U.S.A., pp. 54–59, June 1995.
[31] S. Benedetto and G. Montorsi, “Versatile bandwidth-efficient parallel and serial turbo-
trellis-coded modulation”, in Proc. 2000 Intern. Symp. on Turbo Codes & Relat. Topics,
Brest, France, pp. 201–208, Sept. 2000.
[36] J. Hagenauer, E. Offer, and L. Papke, “Iterative decoding of binary block and con-
volutional codes”, IEEE Trans. on Information Theory, vol. 42, pp. 429–445, Mar.
1996.
[39] D. MacKay and R. Neal, “Near Shannon limit performance of low density parity
check codes”, Electron. Lett., vol. 32, pp. 1645–1646, Aug. 1996.
[40] D. MacKay, “Good error-correcting codes based on very sparse matrices”, IEEE Trans.
on Information Theory, vol. 45, pp. 399–431, Mar. 1999.
[42] B. J. Frey, Graphical models for machine learning and digital communications. Cam-
bridge, MA: MIT Press, 1998.
[45] X.-Y. Hu, E. Eleftheriou, and D. M. Arnold, “Progressive edge-growth Tanner graphs”,
in Proc. GLOBECOM ’01, San Antonio, TX, Nov. 2001.
[46] G. Cherubini, E. Eleftheriou, and S. Olcer, “On advanced signal processing and coding
techniques for digital subscriber lines”, Records of the Workshop “What is next in
xDSL?”, Vienna, Austria, Sept. 2000.
Assume that code words are sequences of symbols from the finite field $GF(q)$ (see Section 11.2.2), all of length $n$. As there are $q^n$ possible sequences, the introduction of redundancy in the transmitted sequences is possible if the number of code words $M_c$ is less than $q^n$.
We denote by $c$ a transmitted sequence of $n$ symbols taken from $GF(q)$. We also assume that the symbols of the received sequence $z$ are from the same alphabet. We define the error sequence $e$ by the equation (see (11.12) for the binary case)
$$z = c + e \qquad (11.425)$$
Definition 11.16
The number of non-zero components of a vector $x$ is defined as the weight of the vector, denoted by $w(x)$.
Then $w(e)$ is equal to the number of errors that occurred in transmitting the code word.
Definition 11.17
The minimum distance of a code, denoted $d_{min}^{H}$, is equal to the minimum Hamming distance between all pairs of code words; i.e., it is the same as for binary codes.
We will give without proof the following propositions, similar to those for binary codes on page 830.
1. A nonbinary block code with minimum distance $d_{min}^{H}$ can correct all error sequences of weight $\left\lfloor \frac{d_{min}^{H} - 1}{2} \right\rfloor$ or less.
As in the binary case, we ask for a relation among the parameters of a code: $n$, $M_c$, $d_{min}^{H}$, and $q$. It can be proved that for a block code with length $n$ and minimum distance $d_{min}^{H}$,
Linear codes
Definition 11.18
A linear code is a block code with symbols from $GF(q)$ for which:
Example 11.A.1
A binary group code is a linear code with symbols from $GF(2)$.
Example 11.A.2
Consider a block code of length 5 having symbols from $GF(3)$ with code words
0 0 0 0 0
1 0 0 2 1
0 1 1 2 2
2 0 0 1 2
1 1 1 1 0 (11.428)
2 1 1 0 1
0 2 2 1 1
1 2 2 0 2
2 2 2 2 0
By Property b), $-c_2$ is a code word if $c_2$ is a code word; by Property c), $c_1 + (-c_2)$ must also be a code word. As two code words differ in at least $d_{min}^{H}$ positions, there is a code word of weight $d_{min}^{H}$; if there were a code word of weight less than $d_{min}^{H}$, this word would differ from the zero word in fewer than $d_{min}^{H}$ positions.
2. If all code words in a linear code are written as rows of an $M_c \times n$ matrix, every column is composed of all zeros, or contains all elements of the field, each repeated $M_c / q$ times.
Property 2 of nonbinary generalized parity check codes. The code words corresponding to the matrix $H = [A\ B]$ are identical to the code words corresponding to the parity check matrix $\tilde{H} = [B^{-1}A,\ I]$.
Proof. Same as for the binary case.
The matrices in the form $[A\ I]$ are said to be in canonical or systematic form.
Property 3 of nonbinary generalized parity check codes. A code consists of exactly $q^{n-r} = q^k$ code words.
Proof. Same as for the binary case (see Property 3 on page 834).
The first $k = n - r$ symbols are called information symbols, and the last $r$ symbols are called generalized parity check symbols.
11.A. Nonbinary parity check codes 963
Property 4 of nonbinary generalized parity check codes. A code word of weight $w$ exists if and only if some linear combination of $w$ columns of the matrix $H$ is equal to $0$.
Proof. $c$ is a code word if and only if $Hc = 0$. Let $c_j$ be the $j$-th component of $c$ and let $h_i$ be the $i$-th column of $H$; then if $c$ is a code word we have
$$\sum_{j=1}^{n} h_j\, c_j = 0 \qquad (11.433)$$
Property 5 of nonbinary generalized parity check codes. A code has minimum distance $d_{min}^{H}$ if some linear combination of $d_{min}^{H}$ columns of $H$ is equal to $0$, but no linear combination of fewer than $d_{min}^{H}$ columns of $H$ is equal to $0$.
Property 5 is fundamental for the design of nonbinary codes.
Example 11.A.3
Consider the field $GF(4)$, and let $\alpha$ be a primitive element of this field; moreover, consider the generalized parity check matrix
$$H = \begin{bmatrix} 1 & 1 & 1 & 1 & 0 \\ 1 & \alpha & \alpha^2 & 0 & 1 \end{bmatrix} \qquad (11.435)$$
We find that no linear combination of two columns is equal to $0$. However, there are many linear combinations of three columns that are equal to $0$, for example, $h_1 + h_4 + h_5 = 0$, $\alpha h_2 + \alpha h_4 + \alpha^2 h_5 = 0$, \ldots; hence the minimum distance of this code is $d_{min}^{H} = 3$.
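The claim $d_{min}^{H} = 3$ can be checked by brute force over the column combinations, as Property 5 suggests. In the sketch below, $GF(4)$ is encoded as the integers 0, 1, 2 ($= \alpha$), 3 ($= \alpha^2$): addition is bitwise XOR, and multiplication uses the exponent table of $\alpha$. The representation and helper names are ours.

```python
from itertools import combinations, product

# GF(4) arithmetic: addition is XOR; multiplication via exponents of alpha.
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 3]

# Columns h_1, ..., h_5 of H in (11.435), as pairs over GF(4).
H_cols = [(1, 1), (1, 2), (1, 3), (1, 0), (0, 1)]

def combo(cols, coeffs):
    """Linear combination sum_j coeffs[j] * cols[j] over GF(4)."""
    top = bottom = 0
    for (c0, c1), x in zip(cols, coeffs):
        top ^= gmul(c0, x)
        bottom ^= gmul(c1, x)
    return (top, bottom)

# No nonzero combination of one or two columns vanishes ...
for w in (1, 2):
    for cols in combinations(H_cols, w):
        for coeffs in product(range(1, 4), repeat=w):
            assert combo(cols, coeffs) != (0, 0)

# ... but h_1 + h_4 + h_5 = 0, as in the text, so d_min^H = 3.
print(combo([H_cols[0], H_cols[3], H_cols[4]], (1, 1, 1)))
```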
The matrix $G$ is called the generator matrix of the code and is expressed as
$$G = [I,\ -A^T] \qquad (11.438)$$
so that
$$c^T = (c_1, \ldots, c_{n-r})\, G \qquad (11.439)$$
Thus the code words, considered as row vectors, are given as all linear combinations of the rows of the matrix $G$. A nonbinary generalized parity check code can be specified by giving its generalized parity check matrix or its generator matrix.
Example 11.A.4
Consider the field $GF(4)$ and let $\alpha$ be a primitive element of this field; moreover, consider the generalized parity check matrix (11.435). The generator matrix of this code is given by
$$G = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & \alpha \\ 0 & 0 & 1 & 1 & \alpha^2 \end{bmatrix} \qquad (11.440)$$
There are 64 code words, corresponding to all linear combinations of the rows of the matrix $G$.
Coset
We give the following properties of the cosets, omitting the proofs.
1. Every one of the $q^n$ vectors occurs in one and only one coset.
2. Suppose that, instead of choosing $\eta_i$ as coset leader of the $i$-th coset, we choose another element of that coset as the coset leader; then the coset formed by using the new coset leader contains exactly the same vectors as the old coset.
3. There are $q^r$ cosets.
Syndrome decoding
Another method of decoding is syndrome decoding. For any generalized parity check matrix $H$ and all vectors $z$ of length $n$, we define the syndrome of $z$, $s(z)$, as
$$s(z) = Hz \qquad (11.441)$$
We can show that all vectors in the same coset have the same syndrome and vectors in different cosets have different syndromes. This leads to the following decoding method:
Step 1: compute the syndrome of the received vector; this syndrome identifies the coset in which the received vector lies, and hence the leader of that coset.
Step 2: subtract the coset leader from the received vector to find the decoded code word.
The difficulty with this decoding method is in the second part of Step 1, that is, identifying the coset leader that corresponds to the computed syndrome; this step is equivalent to finding a linear combination of the columns of $H$ which is equal to that syndrome, using the smallest number of columns. The algebraic structure of the generalized parity check matrix for certain classes of codes allows for algebraic means of finding the coset leader from the syndrome.
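A table-driven sketch of the two steps above for the binary case ($q = 2$, so subtraction is XOR). The (7,4) Hamming parity check matrix is used purely as a convenient illustrative example, not as a code discussed in the text.

```python
from itertools import product

# Parity check matrix of the (7,4) Hamming code: column j is the binary
# representation of j, so all columns are distinct and nonzero.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(z):
    return tuple(sum(h * zi for h, zi in zip(row, z)) % 2 for row in H)

# Coset-leader table: for each syndrome keep a minimum-weight representative
# (vectors are enumerated in order of increasing weight; the first one wins).
leader = {}
for e in sorted(product((0, 1), repeat=7), key=sum):
    leader.setdefault(syndrome(e), e)

def decode(z):
    e = leader[syndrome(z)]                               # Step 1: syndrome -> coset leader
    return tuple((zi - ei) % 2 for zi, ei in zip(z, e))   # Step 2: subtract the leader

z = (0, 0, 1, 0, 0, 0, 0)   # all-zero code word with a single error
print(decode(z))            # the error is removed
```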
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 12
During the 1980s an evolution in the methods to transmit data over channels with limited bandwidth took place, giving rise to techniques for joint coding and modulation that are generally known by the name of trellis coded modulation (TCM). The main characteristic of TCM lies in the fact that it yields coding gains with respect to conventional modulation techniques without requiring that the channel bandwidth be increased. The first article on TCM, by Ungerboeck, appeared in 1976; later, a more detailed publication by the same author on the principles of TCM [1] spurred considerable interest in this topic [2, 3, 4, 5, 6, 7, 8], leading to a full development of the theory of TCM.
TCM techniques use multilevel modulation with a set of signals from a one-, two-, or multidimensional space. The choice of the signals that generate a code sequence is determined
by a finite-state encoder. In TCM, the set of modulation signals is expanded with respect to
the set used by an uncoded, i.e. without redundancy, system; in this manner, it is possible
to introduce redundancy in the transmitted signal without widening the bandwidth. At the
receiver, the signals in the presence of additive noise and channel distortion are decoded
by a maximum likelihood sequence decoder. By simple TCM techniques using a four-state
encoder, it is possible to obtain a coding gain of 3 dB with respect to conventional uncoded
modulation; with more sophisticated TCM techniques, coding gains of 6 dB or more can
be achieved (see Chapter 6).
Errors in the decoding of the received signal sequence are less likely to occur if the
waveforms, which represent the code sequences, are easily distinguishable from each other;
in mathematical terms, the signal sequences, represented in the Euclidean multidimensional
space, need to be separated by large distances. The novelty of TCM is in postulating the expansion of the set of symbols¹ in order to provide the redundancy necessary for the
encoding process. The construction of modulation code sequences that are characterized
by a free distance, i.e. the minimum Euclidean distance between code sequences, that is
much larger than the minimum distance between uncoded modulation symbols, with the
same information bit rate, and the same bandwidth and power of the modulated signal,
is obtained by the joint design of encoder and bit mapper. The term trellis derives from
the similarity between state transition diagrams of a TCM encoder and trellis diagrams
1 In the first part of this chapter we mainly use the notion of symbols of an alphabet with cardinality M, although
the analysis could be conducted by referring to vectors in the signal space as modulation signals. We will use
the term “signals” instead of “symbols” only in the multidimensional case.
968 Chapter 12. Trellis coded modulation
of binary convolutional codes; the difference lies in the fact that, in TCM schemes, the
branches of the trellis are labeled with modulation symbols rather than binary symbols.
Thanks to the use of sophisticated TCM schemes, it was possible to achieve reliable
data transmission over telephone channels at rates much higher than 9.6 kbit/s, which for
years was considered the practical limit. In the mid-1980s, the rate of 14.4 kbit/s was
reached. Transmission at a maximum bit rate of 28.8 kbit/s was later specified in the stan-
dard CCITT V.34, and extensions were proposed to achieve the rates of 31.2 kbit/s and
33.6 kbit/s.
Example 12.1.1
Consider an uncoded 4-PSK system and an 8-PSK system that uses a binary error correcting code with rate 2/3; both systems transmit two information bits per modulation interval, which corresponds to a spectral efficiency of 2 bit/s/Hz. If the 4-PSK system operates at an error probability of $10^{-5}$ for a given signal-to-noise ratio $\Gamma$, the 8-PSK system operates at an error probability larger than $10^{-2}$, due to the smaller Euclidean distance between signals of the 8-PSK system. We must use an error correcting code with minimum Hamming distance $d_{min}^{H} \geq 7$ to reduce the error probability to the same value as for the uncoded 4-PSK system. A binary convolutional code with rate 2/3 and constraint length 6 has the required value of $d_{free}^{H} = 7$. Decoding requires a decoder with 64 states that implements the Viterbi algorithm. However, even after increasing the complexity of the 8-PSK system, we have obtained an error probability only equal to that of the uncoded 4-PSK system.
Two problems determine the unsatisfactory result obtained with the traditional approach. The first originates from the use of independent hard decisions taken by the detector before decoding; hard-input decoding leads to an irreversible loss of information; the remedy is the use of soft decoding (see page 912), whereby the decoder operates directly on the samples at the demodulator output. The second derives from the independent design of the encoder and the bit mapper.
12.1. Linear TCM for one- and two-dimensional signal sets 969
Figure 12.1. Block diagram of a transmission system with trellis coded modulation.
We now consider the transmission system of Figure 12.1, where the transmitted symbol sequence $\{a_k\}$ is produced by a finite-state machine having the information bit sequence $\{b_\ell\}$ as input, possibly with a number of information bits per modulation interval larger than one. We denote by $1/T$ the modulation rate and by $\mathcal{A}$ the alphabet of $a_k$. For an AWGN channel, at the decision point the received samples in the absence of ISI are given by (see (8.173))
$$z_k = a_k + w_k \qquad (12.1)$$
where $\{w_k\}$ is a sequence of white Gaussian noise samples. Maximum likelihood sequence detection represents the optimum strategy for decoding a sequence transmitted over a dispersive noisy channel. The decision rule consists in determining the sequence $\{\hat{a}_k\}$ closest to the received sequence in terms of Euclidean distance (see (8.190)) in the set $\mathcal{S}$ of all possible code symbol sequences. MLSD is efficiently implemented by the Viterbi algorithm, provided that the generation of the code symbol sequences follows the rules of a finite-state machine.
In relation to (8.194), we define as free distance, $d_{free}$, the minimum Euclidean distance between two code symbol sequences $\{\alpha_k\}$ and $\{\beta_k\}$ that belong to the set $\mathcal{S}$, given by
$$d_{free}^2 = \min_{\{\alpha_k\} \neq \{\beta_k\}} \sum_k |\alpha_k - \beta_k|^2 \qquad \{\alpha_k\}, \{\beta_k\} \in \mathcal{S} \qquad (12.2)$$
The most probable error event is determined by two code symbol sequences of the set $\mathcal{S}$ at the minimum distance. The assignment of symbol sequences using a code that is optimized for the Hamming distance does not guarantee an acceptable structure in terms of Euclidean distance, as in general the relation between the Hamming distance and the Euclidean distance is not monotonic.
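The non-monotonic relation can be seen directly on 8-PSK with a natural binary labeling; the labeling below is our own illustrative choice, not one advocated by the text.

```python
import cmath
import math

def psk(i):
    """Unit-energy 8-PSK symbol with index i."""
    return cmath.exp(1j * 2 * math.pi * i / 8)

def hamming(a, b):
    """Hamming distance between two 3-bit labels."""
    return bin(a ^ b).count("1")

# Labels 000 and 100 differ in one bit but label antipodal points (distance 2);
# labels 011 and 100 differ in all three bits but label adjacent symbols.
d_ham_far = hamming(0b000, 0b100)     # 1 bit
d_euc_far = abs(psk(0) - psk(4))      # 2.0
d_ham_near = hamming(0b011, 0b100)    # 3 bits
d_euc_near = abs(psk(3) - psk(4))     # 2 sin(pi/8), about 0.765
print(d_ham_far, d_ham_near, d_euc_far > d_euc_near)
```

The pair at Hamming distance 3 is thus much closer in Euclidean distance than the pair at Hamming distance 1.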
Encoder and modulator must then be jointly designed for the purpose of assigning to symbol sequences waveforms that are separated in the Euclidean signal space by a distance equal to at least $d_{free}$, where $d_{free}$ is greater than the minimum distance between the symbols of an uncoded system.
At the receiver, the demodulator-decoder does not make errors if the received signal in the Euclidean signal space is at a distance smaller than $d_{free}/2$ from the transmitted sequence.
For an input vector $b_k$ and a state $s_{k-1}$, the first equation describes the choice of the transmitted symbol $a_k$ from a certain constellation, the second the choice of the next state $s_k$.
Interdependence between the symbols $\{a_k\}$ is introduced without a reduction of the bit rate by increasing the cardinality of the alphabet. For example, for a length $K$ of the sequence of input vectors, if we replace $\mathcal{A}$ of cardinality $M$ with $\mathcal{A}' \supset \mathcal{A}$ of cardinality $M' > M$, and we select $M^K$ sequences as a subset of $(\mathcal{A}')^K$, a better separation of the code sequences in the Euclidean space may be obtained. Hence, we can obtain a minimum distance $d_{free}$ between any two sequences larger than the minimum distance between signals in $\mathcal{A}^K$. Note that this operation may cause an increase in the average symbol energy from $E_{s,u}$ for uncoded transmission to $E_{s,c}$ for coded transmission, and hence a loss in efficiency given by $E_{s,c}/E_{s,u}$.
Furthermore, we define as $N_{free}$ the number of sequences that a code sequence has, on average, at the distance $d_{free}$ in the Euclidean multidimensional space.
Example
Suppose we want to transmit two bits of information per symbol. Instead of using QPSK modulation, we can use the scheme illustrated in Figure 12.2.
Figure 12.2. Eight-state trellis encoder and bit mapper for the transmission of 2 bits per modulation interval by 8-PSK.
The scheme has two parts. The first is a finite-state sequential machine with 8 states, where the state $s_k$ is defined by the content of the memory cells, $s_k = [s_k^{(2)}, s_k^{(1)}, s_k^{(0)}]$. The two bits $b_k = [b_k^{(2)}, b_k^{(1)}]$ are input to the FSM, which undergoes a transition from state $s_{k-1}$ to one of four possible next states, $s_k$, according to the function $g$. The second part is the bit mapper, which maps the two information bits and one bit that depends on the state, i.e. the three bits $[b_k^{(2)}, b_k^{(1)}, s_{k-1}^{(0)}]$, into one of the symbols of an eight-ary constellation according to the function $f$, for example, an 8-PSK constellation using the map of Figure 12.5. Note that the transmission of two information bits per modulation interval is achieved. Therefore the constellation of the system is expanded by a factor of 2 with respect to uncoded QPSK transmission. Recall from the discussion in Section 6.10 that most of the achievable coding gain for transmission over an ideal AWGN channel of two bits per modulation interval can be obtained by doubling the cardinality of the constellation from four to eight symbols. We will see that trellis coded modulation using the simple scheme of Figure 12.2 allows a coding gain of 3.6 dB to be achieved.
For the graphical representation of the functions $f$ and $g$, it is convenient to use a trellis diagram; the nodes of the trellis represent the FSM states and the branches represent the possible transitions between states. For a given state $s_{k-1}$, a branch is associated by the function $g$ with each possible vector $b_k$, reaching a next state $s_k$. Each branch is labeled with the corresponding value of the transmitted symbol $a_k$. For the encoder of Figure 12.2 and the map of Figure 12.5, the corresponding trellis is shown in Figure 12.3, where the trellis is terminated by forcing the state of the FSM to zero at the instant $k = 4$. For a general representation of the trellis, see Figure 12.13.
Each path of the trellis corresponds to only one message sequence $\{b_\ell\}$ and is associated with only one sequence of code symbols $\{a_k\}$. The optimum decoder searches the trellis for the most probable path, given that the received sequence $\{z_k\}$ is observed at the output of the demodulator. This search is usually realized by the Viterbi algorithm (see Section 8.10). Because of the presence of noise, the chosen path may not coincide with the correct one, but diverge from it at the instant $k = i$ and rejoin it at the instant $k = i + L$; in this case we say that an error event of length $L$ has occurred, as illustrated by the example in Figure 12.4 for an error event of length two (see Definition 8.1 on page 683).
Figure 12.3. Trellis diagram for the encoder of Figure 12.2 and the map of Figure 12.5. Each branch is labeled with the corresponding value of $a_k$.
Note that in a trellis diagram more branches may connect the same pair of nodes. In this case we speak of parallel transitions, and by the term free distance of the code
Figure 12.4. Section of the trellis for the decoder of an eight-state trellis code. The two continuous lines indicate two possible paths, relative to the two 8-PSK signal sequences $\{a_k\} = 4, 1, 6, 6, 4$ and $\{\hat{a}_k\} = 4, 7, 7, 0, 4$.
we denote the minimum among the distances between symbols on parallel transitions and the distances between code sequences associated with pairs of paths in the trellis that originate from a common node and merge into a common node after $L$ transitions, $L > 1$.
By utilizing the sequence of samples $\{z_k\}$, the decoding of a TCM signal is done in two phases. In the first phase, called subset decoding, within each subset of symbols assigned to the parallel transitions in the trellis diagram, the receiver determines the symbol closest to the received sample; these symbols are then stored together with their squared distances from the received sample. In the second phase we apply the Viterbi algorithm to find the code sequence $\{\hat{a}_k\}$ along the trellis such that the sum of the squared distances between the code sequence and the sequence $\{z_k\}$ is minimum. Recalling that the signal is obtained at the output of the demodulator in the presence of additive white Gaussian noise with variance $\sigma_I^2$ per dimension, the probability of an error event for large values of the signal-to-noise ratio is approximated by (see (8.195))
$$P_e \simeq N_{free}\, Q\!\left(\frac{d_{free}}{2\sigma_I}\right) \qquad (12.4)$$
where $d_{free}$ is defined in (12.2).
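The first phase amounts to a nearest-symbol search within each subset. A minimal sketch for 8-PSK with antipodal-pair subsets, in the spirit of the C-level partition of Figure 12.5 (the symbol indexing around the unit circle is our own choice):

```python
import cmath
import math

# Unit-energy 8-PSK symbols and antipodal-pair subsets (two symbols each).
symbols = [cmath.exp(1j * 2 * math.pi * i / 8) for i in range(8)]
subsets = [[symbols[i], symbols[i + 4]] for i in range(4)]

def subset_decode(z, subset):
    """Closest symbol of the subset to the received sample, with its squared distance."""
    best = min(subset, key=lambda a: abs(z - a) ** 2)
    return best, abs(z - best) ** 2

z = 0.9 + 0.2j  # received sample
branch_metrics = [subset_decode(z, s) for s in subsets]
# The second phase (the Viterbi algorithm) would then accumulate these
# branch metrics along the trellis paths.
```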
From Definition 6.2 on page 508 and the relation (12.4) between Euclidean distance and error probability, we give the definition of the asymptotic coding gain, $G_{code}$,² as the ratio between the minimum distance $d_{free}$ between code sequences and the minimum Euclidean distance for uncoded sequences, equal to the minimum distance $\tilde{\Delta}_0$ between symbols of the constellation of an uncoded system, normalized by the ratio between the average energy of the coded sequence, $E_{s,c}$, and the average energy of the uncoded sequence, $E_{s,u}$. The coding gain is then expressed in dB as
$$G_{code} = 10 \log_{10} \frac{d_{free}^2 / \tilde{\Delta}_0^2}{E_{s,c}/E_{s,u}} \qquad (12.5)$$
² To emphasize the dependence of the asymptotic coding gain on the choice of the symbol constellations of the coded and uncoded systems, sometimes the information on the considered modulation schemes is included as a subscript in the symbol used to denote the coding gain, e.g. $G_{8PSK/4PSK}$ for the introductory example.
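Evaluating (12.5) for the introductory example gives the 3.6 dB figure quoted earlier. For the eight-state 8-PSK code of Figure 12.2, the minimum-distance error event is known to accumulate the squared distances $\Delta_1^2 + \Delta_0^2 + \Delta_1^2$ (a standard result, quoted here without derivation), while uncoded QPSK with the same unit energy has $\tilde{\Delta}_0^2 = 2$ and $E_{s,c} = E_{s,u}$:

```python
import math

d0_sq = (2 * math.sin(math.pi / 8)) ** 2   # Delta_0^2 of unit-energy 8-PSK, ~0.586
d1_sq = 2.0                                # Delta_1^2 = (sqrt(2))^2
d_free_sq = d1_sq + d0_sq + d1_sq          # ~4.586
G_code = 10 * math.log10(d_free_sq / 2.0)  # (12.5) with E_{s,c}/E_{s,u} = 1
print(round(G_code, 1))  # 3.6 dB
```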
the procedure, it is required that the minimum Euclidean distance at the $q$-th level of partitioning, $\Delta_q$, is maximum. At the $n$-th level of partitioning the subsets $A_n(\ell)$ consist of only one element each; to subsets with only one element we assign the minimum distance $\Delta_n = \infty$; at the end of the procedure we obtain a tree diagram of binary partitioning for the symbol set. At the $q$-th level of partitioning, to the two subsets obtained from a subset at the $(q-1)$-th level we assign the binary symbols $y^{(q-1)} = 0$ and $y^{(q-1)} = 1$, respectively; in this manner, an $n$-tuple of binary symbols $y_i = (y_i^{(n-1)}, \ldots, y_i^{(1)}, y_i^{(0)})$ is associated with each element $\alpha_i$ found at an end node of the tree diagram.³
Therefore the Euclidean distance between two elements of $\mathcal{A}$, $\alpha_i$ and $\alpha_m$, indicated by the binary vectors $y_i$ and $y_m$ that are equal in the first $q$ components, satisfies the relation
$$|\alpha_i - \alpha_m| \geq \Delta_q \qquad \text{for } y_i^{(p)} = y_m^{(p)}, \quad p = 0, \ldots, q-1, \quad i \neq m \qquad (12.7)$$
In fact, because of the equality of the components in the positions from $(0)$ up to $(q-1)$, the two elements are in the same subset $A_q(\ell)$ at the $q$-th level of partitioning; therefore their Euclidean distance is at least equal to $\Delta_q$.
Example 12.1.2
The partitioning of the set A_0 of symbols with statistical power E[|a_k|²] = 1 for an 8-PSK system is illustrated in Figure 12.5. The minimum Euclidean distance between elements of the set A_0 is Δ_0 = 2 sin(π/8) = 0.765. At the first level of partitioning the two subsets B_0 = {(y^{(2)}, y^{(1)}, 0), y^{(i)} = 0, 1} and B_1 = {(y^{(2)}, y^{(1)}, 1), y^{(i)} = 0, 1} are found, each with four elements and minimum Euclidean distance Δ_1 = √2. At the second level of partitioning four subsets C_0 = {(y^{(2)}, 0, 0), y^{(2)} = 0, 1}, C_2 = {(y^{(2)}, 1, 0), y^{(2)} = 0, 1}, C_1 = {(y^{(2)}, 0, 1), y^{(2)} = 0, 1}, and C_3 = {(y^{(2)}, 1, 1), y^{(2)} = 0, 1} are found, each with two elements and minimum Euclidean distance Δ_2 = 2. Finally, at the last level eight subsets D_0, …, D_7 are found, each with one element and minimum Euclidean distance Δ_3 = ∞.
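As a quick numerical check of these distances (a sketch added here, not part of the original text), the following Python fragment rebuilds the 8-PSK partition from the natural index labeling, in which the q least significant bits of a symbol's index identify its subset at level q:

```python
import itertools
import math

# 8-PSK symbols with unit statistical power; symbol i sits at angle 2*pi*i/8.
symbols = [complex(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
           for i in range(8)]

def min_distance(subset):
    """Minimum Euclidean distance between distinct elements of a subset."""
    return min(abs(a - b) for a, b in itertools.combinations(subset, 2))

# Delta_q = smallest intra-subset distance over the 2^q subsets at level q.
level_dmin = []
for q in range(3):
    subsets = [[symbols[i] for i in range(8) if i % (1 << q) == l]
               for l in range(1 << q)]
    level_dmin.append(min(min_distance(s) for s in subsets))

print([round(d, 3) for d in level_dmin])   # -> [0.765, 1.414, 2.0]
```

The three values reproduce Δ_0 = 2 sin(π/8), Δ_1 = √2, and Δ_2 = 2 of Example 12.1.2.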
Example 12.1.3
The partitioning of the set A_0 of symbols with statistical power E[|a_k|²] = 1 for a 16-QAM system is illustrated in Figure 12.6. The minimum Euclidean distance between the elements of A_0 is Δ_0 = 2/√10 = 0.632. Note that at each successive partitioning level the minimum Euclidean distance among the elements of a subset increases by a factor equal to √2; therefore at the third level of partitioning the minimum Euclidean distance between the elements of each of the subsets D_i, i = 0, 1, …, 7, is Δ_3 = √8 Δ_0.
³ For TCM encoders, the n-tuples of binary code symbols will be indicated by y = (y^{(n−1)}, …, y^{(0)}) rather than by the notation c employed in the previous chapter.
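The √2-per-level scaling can also be verified numerically. In the sketch below (ours, not from the text) membership of the half-difference of two points in the sublattice chain Z² / RZ² / 2Z² / 2RZ² decides whether the two 16-QAM points share a subset at a given level; the membership tests are our own encoding of that chain:

```python
import itertools
import math

# 16-QAM on the odd-integer grid {-3,-1,1,3}^2, normalized to unit power
# (the average of x^2 + y^2 over the grid is 10).
pts = [(x, y) for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)]
scale = 1 / math.sqrt(10)

def same_subset(p, r, level):
    """Half the difference of two points lies in Z^2; its membership in the
    chain Z^2 / RZ^2 / 2Z^2 / 2RZ^2 identifies the partitioning level."""
    a, b = (p[0] - r[0]) // 2, (p[1] - r[1]) // 2
    if level == 1:
        return (a + b) % 2 == 0                                       # RZ^2
    if level == 2:
        return a % 2 == 0 and b % 2 == 0                              # 2Z^2
    if level == 3:
        return a % 2 == 0 and b % 2 == 0 and ((a + b) // 2) % 2 == 0  # 2RZ^2
    return True                                                       # level 0

dmin = [min(scale * math.hypot(p[0] - r[0], p[1] - r[1])
            for p, r in itertools.combinations(pts, 2)
            if same_subset(p, r, level))
        for level in range(4)]
print([round(d, 3) for d in dmin])   # -> [0.632, 0.894, 1.265, 1.789]
```

Each level's minimum distance is √2 times the previous one, and the last equals √8 Δ_0 as stated in Example 12.1.3.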
12.1. Linear TCM for one- and two-dimensional signal sets 975
Figure 12.5. Partitioning of the symbol set for an 8-PSK system. [From Ungerboeck (1982). © 1982 IEEE.]
Figure 12.6. Partitioning of the symbol set for a 16-QAM system. [From Ungerboeck (1982). © 1982 IEEE.]
12.1.3 Lattices
Several constellations and the relative partitioning can be effectively described by lattices;
furthermore, as we will see in the following sections, the formulation based on lattices is
particularly convenient in the discussion on multidimensional trellis codes.
In other words, E_8 is the set of eight-dimensional points whose components are all integers, or all halves of odd integers, and that sum to an even number. E_8 is called the Gosset lattice.
We now discuss set partitioning with the aid of lattices. First we recall the properties of subsets obtained by partitioning.
If the set A has a group structure with respect to a certain operation (see page 844), the partitioning can be done so that the sequence of subsets A_0, A_1(0), …, A_n(0), with A_q(0) ⊂ A_{q−1}(0), forms a chain of subgroups of A_0; in this case the subsets A_q(ℓ), ℓ ∈ {1, …, 2^q − 1}, are called cosets of the subgroup A_q(0) with respect to A_0 (see page 837), and are obtained from the subgroup A_q(0) by translations. The distribution of Euclidean distances between elements of a coset A_q(ℓ) is equal to the distribution of the Euclidean distances between elements of the subgroup A_q(0), as the "difference" between two elements of a coset yields an element of the subgroup; in particular, for the minimum Euclidean distance in subsets at a given level of partitioning it holds that

  Δ_q(ℓ) = Δ_q    ∀ℓ ∈ {0, …, 2^q − 1}    (12.10)
The lattice Λ in ℝ^D defined as in (12.8) has a group structure with respect to addition. With a suitable translation and normalization, the set A_0 for PAM or QAM is represented by a subset of Z or Z². To get a QAM constellation from Z², we define, for example, the translated and normalized lattice Q = c(Z² + (1/2, 1/2)), where c is an arbitrary scaling factor, generally chosen to normalize the statistical power of the symbols to 1. Figure 12.8 illustrates how QAM constellations are obtained from Z².
If we apply binary partitioning to the set Z or Z², we still get infinite lattices in ℝ or ℝ², in which the minimum Euclidean distance increases with respect to the original lattice. Formally, we can assign the binary representations of the tree diagram obtained by
partitioning to the lattices; for transmission, a symbol is chosen as representative for each lattice at an end node of the tree diagram.
Definition 12.1
The notation X/X′ denotes the set of subsets obtained from the decomposition of the group X into the subgroup X′ and its cosets. The set X/X′ forms in turn a group, called the quotient group of X with respect to X′. It is called binary if the number of its elements is a power of two.
QAM. The first subgroup in the binary partitioning chain of the two-dimensional lattice Z² is a lattice obtained from Z² by a rotation of π/4 and multiplication by √2. The matrix of this linear transformation is

  R = [ 1  1 ]
      [ −1 1 ]    (12.16)

Successive subgroups in the binary partitioning chain are obtained by repeated application of the linear transformation R,

  A_q(0) = Z² R^q    (12.17)
Let

  iZ²R^q = {(ik, im)R^q | k, m ∈ Z},  i ∈ N    (12.18)

where N is the set of natural numbers; then the sequence

  Z² / Z²R / Z²R² / Z²R³ / ··· = Z² / Z²R / 2Z² / 2Z²R / ···    (12.19)

forms a binary partitioning chain of the lattice Z², with increasing minimum distances given by

  Δ_q = 2^{q/2}    (12.20)

This binary partitioning chain is illustrated in Figure 12.9.
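The growth law (12.20) can be checked directly (a sketch of ours, not from the text): apply R repeatedly to a patch of Z² and measure the shortest nonzero vector of the resulting sublattice.

```python
import math

def apply_R(v, times):
    """Act on the row vector (k, m) with R = [[1, 1], [-1, 1]],
    i.e. (k, m) -> (k - m, k + m), 'times' times."""
    a, b = v
    for _ in range(times):
        a, b = a - b, a + b
    return a, b

def min_norm(q, span=4):
    """Minimum norm of a nonzero point of Z^2 R^q, searched on a small grid."""
    best = float("inf")
    for k in range(-span, span + 1):
        for m in range(-span, span + 1):
            if (k, m) != (0, 0):
                a, b = apply_R((k, m), q)
                best = min(best, math.hypot(a, b))
    return best

norms = [min_norm(q) for q in range(5)]
print([round(n, 3) for n in norms])   # -> [1.0, 1.414, 2.0, 2.828, 4.0]
```

The minimum norms follow 2^{q/2}, confirming that each application of R scales the lattice by √2.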
The cosets A_q(ℓ) are obtained by translations of the subgroup A_q(0) as in the one-dimensional case (12.13), with

  t(ℓ) = (i, m)    (12.21)

where, as can be observed in Figure 12.10 for the case q = 2 with A_2(0) = 2Z²,

  i ∈ {0, …, 2^{q/2} − 1},   m ∈ {0, …, 2^{q/2} − 1},   ℓ = 2^{q/2} i + m,   q even
  i ∈ {0, …, 2^{(q+1)/2} − 1},   m ∈ {0, …, 2^{(q−1)/2} − 1},   ℓ = 2^{(q−1)/2} i + m,   q odd
    (12.22)
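For q = 2 the coset index of a point of Z² is determined by the parities of its coordinates. The short sketch below (ours, with an assumed helper name `coset_index`) checks that the four cosets 2Z² + (i, m) tile the plane:

```python
from collections import Counter

def coset_index(x, y):
    """Coset of 2Z^2 containing (x, y): t(l) = (i, m) with i = x mod 2,
    m = y mod 2, and l = 2i + m as in the q-even case of (12.22)."""
    return 2 * (x % 2) + (y % 2)

# Each point of a 4x4 patch of Z^2 falls in exactly one of the 4 cosets,
# and each coset is hit equally often.
counts = Counter(coset_index(x, y) for x in range(4) for y in range(4))
print(sorted(counts.items()))   # -> [(0, 4), (1, 4), (2, 4), (3, 4)]
```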
M-PSK. In this case the set of symbols A_0 = {e^{j2π(k/M)} | k ∈ Z} on the unit circle of the complex plane forms a group with respect to multiplication. If the number of elements M is a power of two, the sequence

  A_0 / A_1(0) / A_2(0) / ··· / A_{log₂M}(0)   with   A_q(0) = { e^{j2π(2^q/M)k} | k ∈ Z }    (12.23)

forms a binary partitioning chain of the set A_0, with increasing minimum distances

  Δ_q = 2 sin( (2^q/M) π )   for 0 ≤ q < log₂ M
  Δ_q = ∞                    for q = log₂ M    (12.24)

The cosets of the subgroups A_q(0) are given by

  A_q(ℓ) = { a e^{j2π(ℓ/M)} | a ∈ A_q(0) },   ℓ ∈ {0, …, 2^q − 1}    (12.25)
⁵ In this chapter encoding and bit mapping are jointly optimized, and m represents the number of information bits per modulation interval (see (6.93)). For example, for QAM the rate of the encoder-modulator is R_I = m/2.
of state transitions, the design of a code is completed by assigning symbols to the state transitions such that d_free is maximum. Following the indications of information theory (see Section 6.10), the symbols are chosen from a redundant set A of 2^{m+1} elements.
High coding gains for the transmission of 2 bits per modulation interval by 8-PSK are
obtained by the codes represented in Figure 12.13, with trellis diagrams having 4, 8, and
16 states. For the heuristic design of these codes with moderate complexity we resort to
the following rules proposed by Ungerboeck:
1. all symbols of the set A_0 must be assigned with equal frequency to the state transitions in the trellis diagram, following criteria of regularity and symmetry;
Figure 12.12. Transmission of 2 bits per modulation interval using a two-state trellis code
and 8-PSK. For each state the values of the symbols assigned to the transitions that originate
from that state are indicated.
Figure 12.13. Trellis codes with 4, 8, and 16 states for transmission of 2 bits per modulation interval by 8-PSK. [From Ungerboeck (1982). © 1982 IEEE.]
2. to transitions that originate from the same state are assigned symbols of the subset
B0 , or symbols of the subset B1 ;
3. to transitions that merge to the same state are assigned symbols of the subset B0 , or
symbols of the subset B1 ;
4. to parallel transitions between two states we assign the symbols of one of the subsets
C0 , C1 , C2 , or C3 .
Rule 1 intuitively points to the fact that good trellis codes exhibit a regular structure.
Rules 2, 3, and 4 guarantee that the minimum Euclidean distance between code symbol
sequences that differ in one or more elements is at least twice the minimum Euclidean
distance between uncoded 4-PSK symbols, so that the coding gain is greater than or equal
to 3 dB, as we will see in the next examples.
Example 12.1.8 (Four-state trellis code for the transmission of 2 bit/s/Hz by 8-PSK)
Consider the code with four states represented in Figure 12.13. Between each pair of code symbol sequences in the trellis diagram that diverge at a certain state and merge after more than one transition, the Euclidean distance is greater than or equal to √(Δ_1² + Δ_0² + Δ_1²) = 2.141. For example, this distance is found between the sequences in the trellis diagram labeled by the symbols 0–0–0 and 2–1–2; on the other hand, the Euclidean distance between symbols assigned to parallel transitions is equal to Δ_2 = 2. Therefore the minimum Euclidean distance between code symbol sequences is equal to 2; hence with a four-state trellis code we obtain a gain of 3 dB over uncoded 4-PSK transmission. Note that, as the minimum distance d_free is determined by the parallel transitions, the sequence at minimum distance from a transmitted sequence differs in only one element, which corresponds to the transmitted symbol rotated by 180°.
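The arithmetic of this example is easy to reproduce (a sketch of ours, not part of the text):

```python
import math

d0 = 2 * math.sin(math.pi / 8)   # Delta_0 of unit-power 8-PSK
d1 = math.sqrt(2)                # Delta_1
d2 = 2.0                         # Delta_2: distance of parallel transitions

# Shortest unmerged path accumulates Delta_1, Delta_0, Delta_1.
d_path = math.sqrt(d1**2 + d0**2 + d1**2)
d_free = min(d2, d_path)

# Reference: uncoded 4-PSK with unit power has minimum distance sqrt(2).
gain_db = 20 * math.log10(d_free / math.sqrt(2))
print(round(d_path, 3), d_free, round(gain_db, 2))   # -> 2.141 2.0 3.01
```

The free distance is set by the parallel transitions (2.0 < 2.141), and the gain over uncoded 4-PSK is 10 log₁₀ 2 ≃ 3 dB.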
Figure 12.14. Encoder/bit-mapper for a 4-state trellis code for the transmission of 2 bits per
modulation interval by 8-PSK.
Figure 12.15. Eight-state trellis code for the transmission of 3 bits per modulation interval by
16-QAM.
Consider now a 16-QAM system for the transmission of 3 bits per modulation inter-
val; in this case the reference system uses uncoded 8-PSK or 8-AM-PM, as illustrated in
Figure 12.15.
Example 12.1.9 (Eight-state trellis code for the transmission of 3 bit/s/Hz by 16-QAM)
The partitioning of a symbol set with unit statistical power for a 16-QAM system is shown in Figure 12.6. For the assignment of symbols to the transitions on the trellis, consider the subsets of the symbol set A_0 denoted by D_0, D_1, …, D_7, which contain two elements each. The minimum Euclidean distance between the signals in A_0 is Δ_0 = 2/√10 = 0.632; the minimum Euclidean distance between elements of a subset D_i, i = 0, 1, …, 7, is Δ_3 = √8 Δ_0.
In the 8-state trellis code illustrated in Figure 12.15, four transitions diverge from each state and four merge to each state. To each transition one of the subsets D_i, i = 0, 1, …, 7, is assigned; therefore a transition in the trellis corresponds to a pair of parallel transitions. The assignment of subsets to the transitions satisfies the Ungerboeck rules: the subsets D_0, D_4, D_2, D_6, or D_1, D_5, D_3, D_7, are assigned to the four transitions from or to the same state. In evaluating d_free, this choice guarantees a squared Euclidean distance of at least 2Δ_0² between sequences that diverge from a state and merge after L transitions, L > 1. The squared distance between sequences that diverge from a state and merge after two transitions is equal to 6Δ_0². If two sequences diverge and merge again after three or more transitions, at least one intermediate transition contributes an incremental squared Euclidean distance equal to Δ_0²; thus the minimum Euclidean distance between code symbol sequences that do not differ in only one symbol is √5 Δ_0. As the Euclidean distance between symbols assigned to parallel transitions is equal to √8 Δ_0, the free distance of the code is d_free = √5 Δ_0. Because the minimum Euclidean distance for an uncoded 8-AM-PM reference system with the same average symbol energy is Δ̃_0 = √2 Δ_0, the coding gain is

  G_{16QAM/8AM-PM} = 20 log₁₀ { √(5/2) } ≃ 4 dB
In the trellis diagram of Figure 12.15 four paths are shown that represent error events at minimum distance from the code sequence, having symbols taken from the subsets D_0, D_0, D_3, D_6; the sequences in error diverge from the same state and merge after three or four transitions. It can be shown that for each code sequence and for each state there are two paths leading to error events of length three and two of length four. The number of code sequences at the minimum distance depends on the code sequence being considered; hence N_free represents a mean value. The argument is simple in the case of uncoded 16-QAM, where the number of symbols at the minimum distance from a symbol at the center of the constellation is larger than the number of symbols at the minimum distance from a symbol at the edge of the constellation; in this case we obtain N_free = 3. For the eight-state trellis code of Figure 12.15, we get N_free = 3.75. For constellations of type Z² with a number of signals that tends to infinity, the limit is N_free = 4 for uncoded modulation, and N_free = 16 for coded modulation with an eight-state code.
Let y_k = [y_k^{(m)}, …, y_k^{(1)}, y_k^{(0)}] be the (m+1)-dimensional binary vector at the input of the bit-mapper at the k-th instant; then the selected symbol is expressed as a_k = a[y_k]. Note that in the trellis the symbols of each subset are associated with 2^{m−m̃} parallel transitions.
The free Euclidean distance of a trellis code, given by (12.2), can be expressed as

  d_free = min{ Δ_{m̃+1}, d_free(m̃) }    (12.26)

where Δ_{m̃+1} is the minimum distance between symbols assigned to parallel transitions, and d_free(m̃) denotes the minimum distance between code sequences that differ in more than one symbol. In the particular case m̃ = m, each subset has only one element and therefore there are no parallel transitions; this occurs, for example, for the encoder/bit-mapper for an 8-state code illustrated in Figure 12.2.
From Figure 12.16, observe that the vector sequence {ỹ_k} is the output sequence of a convolutional encoder. Recalling (11.263), for a convolutional code with rate m̃/(m̃+1) and constraint length ν, we have the following constraints on the bits of the sequence {ỹ_k}:

  ⊕_{i=0}^{m̃} [ h_ν^{(i)} y_{k−ν}^{(i)} ⊕ h_{ν−1}^{(i)} y_{k−ν+1}^{(i)} ⊕ ··· ⊕ h_0^{(i)} y_k^{(i)} ] = 0    ∀k    (12.27)

where {h_j^{(i)}}, 0 ≤ j ≤ ν, 0 ≤ i ≤ m̃, are the binary parity check coefficients of the encoder. For an encoder having ν binary memory cells, a trellis diagram with 2^ν states is generated. Note that (12.27) defines only the constraints on the code bits, not the input/output relation of the encoder.
Using polynomial notation, for the binary vector sequence y(D), (12.27) becomes

  [y^{(m̃)}(D), …, y^{(1)}(D), y^{(0)}(D)] [h^{(m̃)}(D), …, h^{(1)}(D), h^{(0)}(D)]^T = 0    (12.28)
6 Note that (12.29) is analogous to (11.304); here the parity check coefficients are used.
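A parity-check constraint of the form (12.27) can be turned into a systematic encoder with feedback by solving for y_k^{(0)}. The sketch below (ours; the function name is an assumption) does this for the 4-state 8-PSK code of Table 12.3, whose taps h^{(1)} = 2₈, h^{(0)} = 5₈ reduce (12.27) to y_k^{(0)} = y_{k−2}^{(0)} ⊕ y_{k−1}^{(1)}:

```python
import random

def encode_8psk_4state(u):
    """Systematic feedback encoder for h^(1) = 2_8, h^(0) = 5_8:
    the information bit passes through as y1, and the parity bit obeys
    y0_k = y0_{k-2} XOR y1_{k-1} (zero initial state)."""
    y1 = list(u)
    y0 = []
    for k in range(len(u)):
        y0_k2 = y0[k - 2] if k >= 2 else 0
        y1_k1 = y1[k - 1] if k >= 1 else 0
        y0.append(y0_k2 ^ y1_k1)
    return y1, y0

# Check the parity equation (12.27): y0_{k-2} + y0_k + y1_{k-1} = 0 mod 2.
random.seed(0)
u = [random.randint(0, 1) for _ in range(50)]
y1, y0 = encode_8psk_4state(u)
ok = all((y0[k - 2] ^ y0[k] ^ y1[k - 1]) == 0 for k in range(2, 50))
print(ok)   # -> True
```

As (12.27) promises, the check holds for every k regardless of the information sequence; the encoder output is constrained, but the information bits themselves are unconstrained.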
Figure 12.17. Block diagram of a systematic convolutional encoder with feedback. [From Ungerboeck (1982). © 1982 IEEE.]
The implementation of a systematic encoder with feedback having ¹ binary memory ele-
ments is illustrated in Figure 12.17.
Computation of d_free
Consider two code sequences y_1(D) and y_2(D), related by y_2(D) = y_1(D) ⊕ e(D), as the code is linear. Let e_k = [e_k^{(m)}, …, e_k^{(0)}]; then the error sequence e(D) is given by

  e(D) = e_i D^i + e_{i+1} D^{i+1} + ··· + e_{i+L} D^{i+L}    (12.33)

where e_i, e_{i+L} ≠ 0, and L > 0. Note that e(D) is also a valid code sequence, as the code is linear. To find a lower bound on the Euclidean distance between the code symbol sequences a_1(D) = a[y_1(D)] and a_2(D) = a[y_2(D)] obtained from y_1(D) and y_2(D), we define the function d[e_k] = min_{z_k} d(a[z_k], a[z_k ⊕ e_k]), where the minimization takes place in the space of the binary vectors z_k = [z_k^{(m)}, …, z_k^{(1)}, z_k^{(0)}]^T, and d(·,·) denotes the Euclidean distance between the specified symbols. For the squared distance between the sequences a_1(D) and a_2(D) the following relation then holds:

  Σ_{k=i}^{i+L} d²(a[y_k], a[y_k ⊕ e_k]) ≥ Σ_{k=i}^{i+L} d²[e_k] = d²[e(D)]    (12.34)
Theorem 12.1
For each sequence e(D) there exists a pair of symbol sequences a_1(D) and a_2(D) for which relation (12.34) is satisfied with the equal sign.
Proof. Because of the symmetry of the subsets of symbols obtained by partitioning, d[e_k] = min_{z_k} d(a[z_k], a[z_k ⊕ e_k]) can be obtained by arbitrarily setting the component z_k^{(0)} of the vector z_k equal to 0 or to 1 and performing the minimization only with respect to the components [z_k^{(m)}, …, z_k^{(1)}]. As encoding does not impose any constraint on the component sequence [y_k^{(m)}, …, y_k^{(1)}],⁷ for each sequence e(D) a code sequence y(D) exists such that relation (12.34) is satisfied as an equality for every value of the index k.
The free Euclidean distance between code symbol sequences can therefore be determined by a method similar to that used to find the free Hamming distance between binary code sequences y(D) (see (11.266)). We need an efficient algorithm to examine all possible error sequences e(D) (12.33), substituting the squared Euclidean distance d²[e_k] for the Hamming weight of e_k; thus

  d²_free(m̃) = min_{e(D) = y_2(D) ⊕ y_1(D) ≠ 0}  Σ_{k=i}^{i+L} d²[e_k]    (12.35)

Let q(e_k) be the number of consecutive zero components of the vector e_k, starting with the component e_k^{(0)}. For example, if e_k = [e_k^{(m)}, …, e_k^{(3)}, 1, 0, 0]^T, then q(e_k) = 2. From the definition of the indices assigned to symbols by partitioning, we obtain that d[e_k] ≥ Δ_{q(e_k)}; moreover, this relation is satisfied as an equality for almost all vectors e_k. Note that d[0] = Δ_{q(0)} = 0. Therefore

  d²_free(m̃) ≥ min_{e(D) ≠ 0}  Σ_{k=i}^{i+L} Δ²_{q(e_k)} = Δ²_free(m̃)    (12.36)
If we assume d_free(m̃) = Δ_free(m̃), the risk of committing an error in evaluating the free distance of the code is low, as the minimum is usually attained by more than one error sequence. By the definition of the free distance in terms of the minimum distances between elements of the subsets of the symbol set, the computation of d_free(m̃) is independent of the particular assignment of the symbols to the binary vectors with (m+1) components, provided that the values of the minimum distances among elements of the subsets are not changed.
At this point it is possible to identify a further important consequence of the constraint (12.30) on the binary coefficients of the systematic convolutional encoder. One can show that an error sequence e(D) begins with e_i = (e_i^{(m)}, …, e_i^{(1)}, 0) and ends with e_{i+L} = (e_{i+L}^{(m)}, …, e_{i+L}^{(1)}, 0). It is therefore guaranteed that to all transitions that originate from the same state, and to all transitions that merge at the same state, are assigned signals of the subset B_0 or signals of the subset B_1. The squared Euclidean distance associated with an error sequence is therefore greater than or equal to 2Δ_1². The constraint on the parity check
⁷ From the parity equation (12.29) we observe that a code sequence {z_k} can have arbitrary values for each m-tuple [z_k^{(m)}, …, z_k^{(1)}].
coefficients allows us, however, to determine only a lower bound for d_free(m̃). For a given sequence Δ_0 ≤ Δ_1 ≤ ··· ≤ Δ_{m̃+1} of minimum distances between elements of subsets, and a code with constraint length ν, a convolutional code that yields the maximum value of d_free(m̃) is usually found by a computer search. The search over the (ν−1)(m̃+1) binary parity check coefficients is organized so that the explicit computation of d_free(m̃) is often avoided.
Tables 12.1 and 12.2 report the optimum codes for TCM with symbols of the type Z¹ and Z², respectively [2]. For 8-PSK, the optimum codes are given in Table 12.3 [2].
Table 12.1 Codes for one-dimensional modulation. [From Ungerboeck (1987). © 1987 IEEE.]

  2^ν  m̃  h¹   h⁰   d²free/Δ²₀  G_4AM/2AM  G_8AM/4AM  G_code   N_free
                                 (m = 1)    (m = 2)    (m → ∞)  (m → ∞)
  4    1   2    5    9.0         2.55       3.31       3.52     4
  8    1   04   13   10.0        3.01       3.77       3.97     4
  16   1   04   23   11.0        3.42       4.18       4.39     8
  32   1   10   45   13.0        4.15       4.91       5.11     12
  64   1   024  103  14.0        4.47       5.23       5.44     36
  128  1   126  235  16.0        5.05       5.81       6.02     66
Table 12.2 Codes for two-dimensional modulation. [From Ungerboeck (1987). © 1987 IEEE.]

  2^ν  m̃  h²  h¹  h⁰  d²free/Δ²₀  G_16QAM/8PSK  G_32QAM/16QAM  G_64QAM/32QAM  G_code   N_free
                                   (m = 3)       (m = 4)        (m = 5)        (m → ∞)  (m → ∞)
Table 12.3 Codes for 8-PSK. [From Ungerboeck (1987). © 1987 IEEE.]

  2^ν  m̃  h²   h¹   h⁰   d²free/Δ²₀  G_8PSK/4PSK  N_free
                                      (m = 2)
  4    1   -    2    5    4.0*        3.01         1
  8    2   04   02   11   4.586       3.60         2
  16   2   16   04   23   5.172       4.13         ≈ 2.3
  32   2   34   16   45   5.758       4.59         4
  64   2   066  030  103  6.343       5.01         ≈ 5.3
  128  2   122  054  277  6.586       5.17         ≈ 0.5
Parity check coefficients are specified in octal notation; for example, the binary vector [h_6^{(0)}, …, h_0^{(0)}] = [1, 0, 0, 0, 1, 0, 1] is represented by h^{(0)} = 105₈. In the tables, an asterisk next to the value of d²free/Δ²₀ indicates that the free distance is determined by the parallel transitions, that is, d_free(m̃) > Δ_{m̃+1}.
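The octal convention is easy to decode mechanically (a sketch of ours; the helper name is an assumption):

```python
def parity_taps(octal_str, nu):
    """Binary vector [h_nu, ..., h_0] from the octal notation of the tables."""
    value = int(octal_str, 8)
    return [(value >> j) & 1 for j in range(nu, -1, -1)]

print(parity_taps("105", 6))   # -> [1, 0, 0, 0, 1, 0, 1]
```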
Encoding
Starting with a constellation A_0 of the type Z¹ or Z² in the one- or two-dimensional signal space, we consider multidimensional trellis codes where the signals to be assigned to the transitions in the trellis diagram come from a constellation A_0^I, I > 2, in the multidimensional space ℝ^I. In practice, if (m+1) binary output symbols of the finite state encoder determine the assignment of modulation signals, it is possible to associate with these binary symbols a sequence of ℓ modulation signals, each in the constellation A_0, transmitted during ℓ modulation intervals, each of duration T; this sequence can be
12.2. Multidimensional TCM 991
In the case of an I-dimensional lattice Λ that can be expressed as the Cartesian product of ℓ factors, all equal to a lattice in the space ℝ^{I/ℓ}, the partitioning of Λ can be derived from the partitioning of the (I/ℓ)-dimensional lattice.
  (B_0 × B_0) ∪ (B_1 × B_1),   (B_0 × B_1) ∪ (B_1 × B_0)    (12.41)

does not yield any increase in the minimum Euclidean distance. Note that the four subsets at the second level of partitioning differ from the Z⁴ lattice only in position with respect to the origin, orientation, and scale; therefore the subsets at successive levels of partitioning are obtained by iterating the procedure described for the first two levels. Thus we have the following partitioning chain
where R is the 2 × 2 matrix given by (12.16). The partitioning of the Z⁴ lattice is illustrated in Figure 12.18 [2].
Optimum codes for the multidimensional TCM are found in a similar manner to that
described for the one- and two-dimensional TCM codes. Codes and relative asymptotic
Table 12.4 Codes for four-dimensional modulation. [From Ungerboeck (1987). © 1987 IEEE.]

  2^ν  m̃  h⁴   h³   h²   h¹   h⁰   d²free/Δ²₀  G_code   N_free
                                    (m → ∞)     (m → ∞)
  8    2   -    -    04   02   11   4.0         4.52     88
  16   2   -    -    14   02   21   4.0         4.52     24
  32   3   -    30   14   02   41   4.0         4.52     8
  64   4   050  030  014  002  101  5.0         5.48     144
  128  4   120  050  022  006  203  6.0         6.28     -
coding gains for four-dimensional TCM, with respect to uncoded modulation with signals of the type Z², are reported in Table 12.4 [2]. These gains are obtained for signal sets with a large number of elements that, in the signal space, occupy the same volume as the elements of the signal set used for uncoded modulation; the comparison is then made for the same statistical power and the same peak power of the two-dimensional signals utilized for uncoded modulation.
Decoding
The decoding of signals generated by multidimensional TCM is achieved by a sequence of operations that inverts the encoding procedure described in the previous subsection. The first stage of decoding consists in determining, for each modulation interval, the Euclidean distance between the received sample and all M signals of the constellation A_0 and also, within each subset A_q(i) of the constellation A_0, the signal â_k(i) that has the minimum Euclidean distance from the received sample. The second stage consists in the decoding, by a maximum likelihood decoder, of the 2^{m̃+1} block codes B_{m̃+1}(ℓ'). Because of the large number M^ℓ of signals in the multidimensional space ℝ^I, in general the block codes have so many elements that the complexity of a maximum likelihood decoder that computed the metric for each element would be excessive. The task of the decoder can be greatly simplified thanks to the method followed
for the construction of the block codes B_{m̃+1}(ℓ'). Block codes are identified by the subsets of the multidimensional constellation A_0^I, which are expressed in terms of the Cartesian
Figure 12.19. Trellis diagram for the decoding of the block codes obtained by partitioning the lattice A_0^4 = Z⁴ into two, four, and eight subsets.
12.3. Rotationally invariant TCM schemes 995
for the decoding of the trellis code. For a code with efficiency equal to (log₂ M − 1/ℓ) bits per modulation interval, the complexity, expressed as the number of branches in the trellis diagram per transmitted information bit, is thus given by

  (ℓM + N_R + 2^{ν+m̃}) / (ℓ log₂ M − 1)    (12.43)
The number N_R can be computed on the basis of the particular choice of the multidimensional constellation partitioning chain; for example, in the case of partitioning of the lattice Z⁴ into four or eight subsets, for four-dimensional TCM we obtain N_R = 20. Whereas in two-dimensional TCM the decoding complexity lies essentially in the implementation of the Viterbi algorithm to decode the trellis code, in four-dimensional TCM most of the complexity is due to the decoding of the block codes.
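Formula (12.43) is straightforward to evaluate. In the sketch below (ours), only N_R = 20 comes from the text; the remaining parameter values (ℓ = 2, M = 8, ν = 3, m̃ = 2) are illustrative assumptions, not taken from the tables:

```python
import math

def branches_per_info_bit(l, M, N_R, nu, m_tilde):
    """Decoding complexity (12.43): trellis branches per information bit."""
    return (l * M + N_R + 2 ** (nu + m_tilde)) / (l * math.log2(M) - 1)

# (2*8 + 20 + 2^5) / (2*3 - 1) = 68 / 5
print(branches_per_info_bit(2, 8, 20, 3, 2))   # -> 13.6
```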
In multidimensional TCM it is common to find codes with a large number N_free of error events at the minimum Euclidean distance; this characteristic is due to the fact that in dense multidimensional lattices the number of points at the minimum Euclidean distance increases rapidly with the number of dimensions. In this case the minimum Euclidean distance is not sufficient to characterize code performance completely, as the difference between the asymptotic coding gain and the effective coding gain, at values of the error probability of practical interest, cannot be neglected.
continue to belong to the code even after the largest possible number of phase rotations. In this case, with differential encoding of the information symbols, the independence of demodulation and decoding from the carrier phase recovered by the synchronization system is guaranteed. A differential decoder is then applied to the sequence of binary symbols {b̃_ℓ} at the output of the decoder for the trellis code to obtain the desired detection of the sequence of information symbols {b̂_ℓ}. Rotationally invariant trellis codes were initially proposed by Wei [11, 12]. In general, invariance to phase rotation is more easily obtained with multidimensional TCM; in the case of two-dimensional TCM for PSK or QAM systems, it is necessary to use codes that are non-linear in GF(2).
In the case of TCM for PSK systems, the invariance to phase rotation can be directly
obtained using PSK signals with differential encoding. The elements of the symbol sets
are not assigned to the binary vectors of the convolutional code but rather to the phase
differences relative to the previous symbols.
Bibliography
[1] G. Ungerboeck, “Channel coding with multilevel/phase signals”, IEEE Trans. on In-
formation Theory, vol. 28, pp. 55–67, Jan. 1982.
[2] G. Ungerboeck, “Trellis coded modulation with redundant signal sets. Part I and Part
II”, IEEE Communications Magazine, vol. 25, pp. 6–21, Feb. 1987.
[3] S. S. Pietrobon, R. H. Deng, A. Lafanechere, G. Ungerboeck, and D. J. Costello
Jr., “Trellis-coded multidimensional phase modulation”, IEEE Trans. on Information
Theory, vol. 36, pp. 63–89, Jan. 1990.
[4] E. Biglieri, D. Divsalar, P. J. McLane, and M. K. Simon, Introduction to trellis-coded
modulation with applications. New York: Macmillan Publishing Company, 1991.
[5] S. S. Pietrobon and D. J. Costello Jr., “Trellis coding with multidimensional QAM
signal sets”, IEEE Trans. on Information Theory, vol. 39, pp. 325–336, Mar. 1993.
[6] J. Huber, Trelliscodierung. Heidelberg, Germany: Springer-Verlag, 1992.
[7] C. Schlegel, Trellis coding. New York: IEEE Press, 1997.
[8] G. D. Forney, Jr. and G. Ungerboeck, “Modulation and coding for linear Gaussian
channels”, IEEE Trans. on Information Theory, vol. 44, pp. 2384–2415, Oct. 1998.
[9] G. D. Forney, Jr., “Convolutional codes I: algebraic structure”, IEEE Trans. on Infor-
mation Theory, vol. IT–16, pp. 720–738, Nov. 1970.
[10] J. K. Wolf, “Efficient maximum likelihood decoding of linear block codes using a
trellis”, IEEE Trans. on Information Theory, vol. IT–24, pp. 76–80, Jan. 1978.
Chapter 13
Precoding and coding techniques
In this chapter we first extend the study of the capacity of ideal AWGN channels introduced
in Section 6.10 to the case of band limited dispersive channels with additive Gaussian noise.
We find that the optimum power spectral density of the transmitted signal is inversely
proportional to the spectral signal-to-noise ratio, and is determined by a water pouring
criterion. Then we discuss practical methods to approximate this capacity. In particular, we
consider OFDM or multicarrier modulation described in Chapter 9, as well as single-carrier
modulation in the form of CAP or QAM as described in Chapter 7, combined with joint
precoding and coding techniques.
The total capacity C_[b/s] is obtained by summing the terms C_[b/s],i, that is

  Σ_{i=1}^{N} C_[b/s],i = Σ_{i=1}^{N} Δf log₂ [ 1 + (Δf P_s(f_i) |G_Ch(f_i)|²) / (Δf P_w(f_i)) ]    (13.4)
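As a small illustration (ours, not from the text), the discrete sum (13.4) is a one-liner once the per-subchannel signal-to-noise ratios are known:

```python
import math

def capacity_bps(df, Ps, G2, Pw):
    """Discrete capacity (13.4): N subchannels of width df, with the PSDs
    Ps, Pw and the channel gain |G_Ch|^2 sampled at the center frequencies."""
    return sum(df * math.log2(1 + p * g / w) for p, g, w in zip(Ps, G2, Pw))

# One subchannel of width 1 Hz with Ps|G_Ch|^2 / Pw = 3 carries log2(4) bit/s.
print(capacity_bps(1.0, [3.0], [1.0], [1.0]))   # -> 2.0
```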
We now find the optimum power spectral density P_s(f) that maximizes the capacity (13.5), under the constraint (13.1) and the condition P_s(f) ≥ 0. Applying the method of Lagrange multipliers, the optimum P_s(f) maximizes the integral

  ∫_B { log₂ [1 + P_s(f) Γ_Ch(f)] + λ P_s(f) } df    (13.6)

where λ is a Lagrange multiplier. Using the calculus of variations (see Appendix 8.A) we find that the optimum PSD must satisfy the condition

  Γ_Ch(f) / (1 + P_s(f) Γ_Ch(f)) + λ ln 2 = 0    (13.7)

which yields

  P_{s,opt}(f) = K − 1/Γ_Ch(f)   for f ∈ B,   P_{s,opt}(f) = 0 otherwise    (13.8)

where B = {f ∈ [0, +∞): P_{s,opt}(f) > 0} is the passband that allows achieving the capacity in (13.5), and K is a constant such that (13.1) is satisfied with the equal sign. This result is due to Shannon, and is valid for non-ideal linear channels in the presence of additive Gaussian noise. The function P_{s,opt}(f) is illustrated in Figure 13.1 for a typical behavior of the function Γ_Ch(f); as the channel impulse response is assumed real valued, for f < 0 we get P_{s,opt}(f) = P_{s,opt}(−f).
Indeed, P_s(f) should assume large values (small values) at frequencies for which Γ_Ch(f) assumes large values (small values). From Figure 13.1 we note that if 1/Γ_Ch(f) is the profile of a cup into which we pour a quantity of water equal to V_P, the distribution of the water in the cup follows the behavior predicted by (13.8); this observation leads to the water pouring interpretation of the optimum distribution of P_s(f) as a function of frequency.
13.1. Capacity of a dispersive channel 1001
Figure 13.1. Illustration of P_{s,opt}(f) for a typical behavior of the function Γ_Ch(f); the water level K satisfies P_{s,opt}(f) + 1/Γ_Ch(f) = K for f ∈ B = [f_1, f_2], and V_P = ∫_B P_{s,opt}(f) df.
Shannon also demonstrated that capacity is achieved if s(t) is a Gaussian process with power spectral density P_{s,opt}(f); the capacity in bits per second is given by

  C_[b/s] = ∫_B log₂ [1 + P_{s,opt}(f) Γ_Ch(f)] df    (13.9)
Note that 1 + Γ_eff corresponds to the geometric mean of the function 1 + P_{s,opt}(f) Γ_Ch(f) in the band B. By analogy with the case of an ideal AWGN channel with limited bandwidth analyzed in Chapter 6, it is useful to define a normalized signal-to-noise ratio for a transmission system over a linear dispersive channel that operates at an encoder-modulator rate R_I, in bits per dimension, as

  Γ̄_eff = Γ_eff / (2^{2R_I} − 1)    (13.12)

where Γ̄_eff > 1 measures the gap that separates the system being considered from capacity.
The passband B that allows achieving capacity represents the most important parameter
of the spectrum obtained by water pouring. For many channels utilized in practice, the
passband B is composed of only one frequency interval [f₁, f₂], as illustrated in Figure 13.1;
in other cases, for example in the presence of high-power narrowband interference signals,
B may be formed by the union of disjoint frequency intervals. In practice, we find that
the dependence of the capacity on P_s(f) is not as critical as the dependence on B; a
constant power spectral density in the band B usually allows a system to closely approach
capacity [1, 2]. Therefore the application of the water pouring criterion may be limited to
the determination of the passband.
In practice we resort to a technique called bit loading to determine the number of bits to
be transmitted over each subchannel per modulation interval, under the constraints: 1) b[i]
can take only a finite number of values determined by the signal constellations; and 2) the
total transmitted power is fixed [3, 4, 5, 6].
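As an illustration only (not the specific algorithms of [3–6]), a greedy Hughes–Hartogs-style allocation adds one bit at a time on the subchannel whose next bit costs the least incremental power; the cost model (2^b − 1)·gap/SNR and all names below are assumptions of this sketch:

```python
import heapq

def greedy_bit_loading(snr, total_power, max_bits=8, gap=1.0):
    """Greedy (Hughes-Hartogs-style) bit loading.

    snr[i] is the subchannel SNR at unit transmit power; the power needed to
    carry b bits on subchannel i is modeled as (2**b - 1) * gap / snr[i].
    Bits are granted one at a time to the subchannel with the smallest
    incremental power cost, until the power budget is exhausted.
    """
    bits = [0] * len(snr)
    power = [0.0] * len(snr)
    # heap of (incremental power for the next bit, subchannel index)
    heap = [(gap / g, i) for i, g in enumerate(snr)]
    heapq.heapify(heap)
    spent = 0.0
    while heap:
        dp, i = heapq.heappop(heap)
        if spent + dp > total_power:
            break                      # cheapest remaining bit no longer fits
        spent += dp
        bits[i] += 1
        power[i] += dp
        if bits[i] < max_bits:
            nxt = (2.0 ** (bits[i] + 1) - 2.0 ** bits[i]) * gap / snr[i]
            heapq.heappush(heap, (nxt, i))
    return bits, power
```

Because the incremental cost of each further bit on a given subchannel doubles, good subchannels naturally saturate and poor ones are left unloaded, mirroring the water pouring behavior at the granularity of whole bits.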
Note that the modulation interval increases as the number of sub-bands increases. To
reduce the delay in the recovery of the information, coding is usually applied “across the
subchannels”, see, for example, [7].
    (σ_a²/4T) |H_Tx(f)|² = P_{s,opt}(f + f₀) 1(f + f₀)      (13.18)
At the receiver, the signal r(t) is first demodulated and then filtered by a filter with impulse
response g_w that suppresses the signal components around −2f₀ and whitens the noise.
Figure 13.2. Equivalence between a continuous-time system, (a) passband model, (b) baseband
equivalent model with g_w(t) = (1/√2) ḡ_w(t), and (c) a discrete-time system, for transmission
over a linear dispersive channel.
13.2. Techniques to achieve capacity 1005
As an alternative, we could use a passband whitening filter (phase splitter) that suppresses
the signal components with negative frequency and whitens the noise in the passband;
see Section 8.14.1.

Consider the baseband equivalent model of Figure 13.2b, where G_C(f) = (1/√2) G_Ch(f + f₀) 1(f + f₀)
and P_{wC}(f) = 2 P_w(f + f₀) 1(f + f₀); we define

    B₀ = [f₁ − f₀, f₂ − f₀]      (13.19)

as the new passband of the desired signal at the receiver. The whitening filter
g_w(t) = (1/√2) ḡ_w(t) has frequency response given by
    |G_w(f)|² = 1/P_{wC}(f)  for f ∈ B₀,  and 0 elsewhere      (13.20)
From the scheme of Figure 8.20, the whitening filter is then followed by a matched filter
g_M with frequency response

    G^{MF}_Rc(f) = H*_Tx(f) G*_C(f) |G_w(f)|²

The frequency response of the overall system is then

    Q(f) = H_Tx(f) G_C(f) G^{MF}_Rc(f) = |H_Tx(f) G_C(f)|² / P_{wC}(f)      (13.23)
Note that Q(f) has the properties of a PSD with passband B₀; in particular, Q(f) is equal
to the noise PSD P_{wR}(f) at the MF output, as

    P_{wR}(f) = P_{wC}(f) |G^{MF}_Rc(f)|² = |H_Tx(f) G_C(f)|² / P_{wC}(f) = Q(f)      (13.24)
Therefore the sequence of samples at the MF output can be expressed as

    x_k = Σ_{i=−∞}^{+∞} a_i h_{k−i} + w̃_k      (13.25)

where the coefficients h_i = q(iT) are given by the samples of the overall impulse response
q(t), and

    w̃_k = w_R(kT) = [w_C * g^{MF}_Rc](t) |_{t=kT}      (13.26)
In this case, because Q(f) is limited to the passband B₀ with bandwidth B = 1/T, there
is no aliasing; the function H(f), periodic of period 1/T, is therefore equal to (1/T) Q(f)
in the band B₀. As P_{wR}(f) = Q(f), {w̃_k} is a sequence of Gaussian noise samples with
autocorrelation sequence r_{w̃}(n) = h_n. Note, moreover, that {h_i} satisfies the Hermitian
property, as H is real valued.
We have thus obtained a discrete-time equivalent channel that can be described using
the D transform as

    H(f) = F̃*(f) f₀² F̃(f)      (13.30)

where the function f̃(D) = 1 + f̃₁D + ··· is associated with a causal (f̃_i = 0 for i < 0),
monic and minimum-phase sequence {f̃_i}, and F̃(f) = f̃(e^{−j2πfT}) is the Fourier transform
of the sequence {f̃_i}. The factor f₀² is the geometric mean of H(f) over an interval of
measure 1/T, that is

    log f₀² = T ∫_{1/T} log H(f) df      (13.31)
where w(D) is a sequence of i.i.d. Gaussian noise samples with variance 1/f₀². Equation
(13.33) is obtained under the assumption that f̃(D) has a stable reciprocal function, and
hence f̃*(1/D*) has an anticausal stable reciprocal function; this condition is verified if
h(D) has no spectral zeros.
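Numerically, (13.31) says that f₀² is the geometric mean of the folded spectrum. With H(f) sampled uniformly over an interval of measure 1/T, a minimal sketch (assuming strictly positive samples, i.e. no spectral zeros, and an illustrative function name) is:

```python
import numpy as np

def geometric_mean_gain(H):
    """f_0^2 from (13.31): geometric mean of the folded spectrum H(f),
    sampled uniformly over an interval of measure 1/T; requires H > 0
    (no spectral zeros, as assumed in the text)."""
    H = np.asarray(H, dtype=float)
    return float(np.exp(np.log(H).mean()))
```

By the arithmetic-geometric mean inequality, this value never exceeds the ordinary average of H(f), with equality only for a flat folded spectrum.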
However, obtaining the reciprocal of f̃(D) does not represent a problem, as z(D) can be
obtained indirectly from the sequence of samples at the output of the WMF. The composite
filter consisting of the whitening filter g_w and the WMF has transfer function

    G^{WMF}_Rc(f) = G^{MF}_Rc(f) / [f₀² F̃*(f)] = H*_Tx(f) G*_C(f) F̃(f) / [P_{wC}(f) H(f)]      (13.34)

The only condition for the stability of the filter (13.34) is given by the Paley–Wiener
criterion.
    Q(f) = |H_Tx(f) G_C(f)|² / P_{wC}(f) = (4T/σ_a²)(1/4) P_{s,opt}(f + f₀) Γ_Ch(f + f₀) 1(f + f₀)      (13.35)

and capacity can be expressed as

    C_[b/s] = ∫_B log₂[1 + P_{s,opt}(f) Γ_Ch(f)] df
            = ∫_{B₀} log₂[1 + P_{s,opt}(f + f₀) Γ_Ch(f + f₀)] df      (13.36)
            = ∫_{B₀} log₂[1 + (σ_a²/T) Q(f)] df
Recall from (13.27) that H(f), periodic of period 1/T, is equal to (1/T) Q(f) in the
band B₀; therefore, using (13.31) for B = 1/T, the capacity C_[b/s] and its approximation
Assume that the tail of the impulse response that causes ISI can be in some way
eliminated, so that at the receiver we observe the sequence a(D) + w(D) rather than the
sequence (13.33). The signal-to-noise ratio of the resultant ideal AWGN channel becomes
Γ_{ISI free} = σ_a² f₀²; thus, from (6.280) the capacity of the ISI-free channel and its
approximation for large values of Γ become
Price was the first to observe that for large values of Γ we obtain (13.39), that is, for high
signal-to-noise ratios the capacity C_[b/s] of the linear dispersive channel is approximately
equal to the capacity of the ideal ISI-free channel obtained by assuming that ISI can be in
some way eliminated from the sequence x(D); in other words, ISI does not significantly
affect the capacity [1].
Recall that to achieve capacity it is necessary that the distribution of the transmitted
signal approximates a Gaussian distribution. We mention that commonly used methods of
shaping, which also minimize the transmitted power, require that the points of the input
constellation are uniformly distributed within a hypersphere [11]. Coding and shaping thus
occur in two Euclidean spaces related by a known linear transformation that preserves
the volume. Coding and shaping can be separately optimized, by choosing a method for
predistorting the signal set in conjunction with a coding scheme that leads to a large coding
gain for an ideal AWGN channel in the signal space where coding takes place, and a
method that leads to a large shaping gain in the signal space where shaping takes place. In
the remaining part of this chapter we focus on precoding and coding methods to achieve
large coding gains for transmission over channels with ISI, assuming the channel impulse
response is known.
¹ The expression α_i = α_j mod Λ₀ denotes that the two symbols α_i and α_j differ by a quantity that belongs
to Λ₀.
The first precoding method was independently proposed for uncoded systems by Tomlinson
[15] and Harashima [16] (TH precoding). Initially, TH precoding was not used in
practice because in an uncoded transmission system the preferred method to cancel ISI
employs a DFE, as it does not require sending information on the channel impulse response
to the transmitter. However, if trellis coding is adopted, decision-feedback equalization is
no longer a very attractive solution, as reliable decisions are made available by the Viterbi
decoder only with a certain delay.
TH precoding, illustrated in Figure 13.3, uses memoryless operations at the transmitter
and at the receiver to obtain samples of both the transmitted sequence a^{(p)}(D) and the
detected sequence â(D) within a finite region that contains A. In principle, TH precoding
can be applied to arbitrary symbol sets A; however, unless it is possible to define an
efficient extension of the region containing A, the advantages of TH precoding are reduced
by the increase of the transmit signal power (transmit power penalty). An efficient extension
exists only if the signal space of a^{(p)}(D) can be “tiled”, that is, completely covered without
overlapping by translated versions of a finite region containing A, given by the union
of the Voronoi regions of the symbols of A, and denoted R(A). Figure 13.4 illustrates the
efficient extension of a two-dimensional 16-QAM constellation, where Λ_T denotes the
sublattice of Λ₀ that identifies the efficient extension.
With reference to Figure 13.3, the precoder computes the sequence of channel input
signal samples a^{(p)}(D) as

    a^{(p)}(D) = a(D) − p(D) + c(D)      (13.41)

where the sequence

    p(D) = [f̃(D) − 1] a^{(p)}(D) = D f̃′(D) a^{(p)}(D)      (13.42)

represents the ISI at the channel output that must be compensated at the transmitter. The
elements of the sequence c(D) are points of the sublattice Λ_T used for the efficient extension
of the region R(A) that contains A.² The k-th element c_k ∈ Λ_T of the sequence c(D) is
chosen so that the statistical power of the channel input sample a_k^{(p)} is minimum; in other
words, the element c_k is chosen so that a_k^{(p)} belongs to the region R = R(A), as illustrated
in Figure 13.4. From (13.40), (13.41), and (13.42), the channel output sequence in the
absence of noise is given by u(D) = a(D) + c(D).
Figure 13.3. TH precoding: bit-mapper b → a, precoder with feedback filter D f̃′(D)
producing a^{(p)}(D), discrete-time channel, detector, feedback filter D f̃′(D) and inverse
bit-mapper â → b̂ at the receiver.
² Equation (13.41) represents the extension of (7.198) to the general case, in which the operation mod M is
substituted by the addition of the sequence c(D).
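For a one-dimensional L-PAM alphabet A = {±1, ±3, …, ±(L−1)}, the addition of c_k ∈ Λ_T = 2L Z in (13.41) reduces to a modulo fold of the ISI-cancelled sample into R = [−L, L). A minimal sketch of this special case (the function names and the toy channel are assumptions, not from the text):

```python
def th_precode(a, f_tail, L):
    """TH precoder, 1-D case: a[k] in {+/-1, ..., +/-(L-1)} (L-PAM).
    f_tail = [f1, f2, ...] are the taps of the monic channel
    f(D) = 1 + f1*D + ...; each output sample is
    a_p[k] = a[k] - p[k] + c[k], with c[k] a multiple of 2L chosen so
    that a_p[k] falls in the region R = [-L, L)."""
    a_p = []
    for k, ak in enumerate(a):
        # ISI p[k] that the channel will add, as in (13.42)
        p = sum(fi * a_p[k - 1 - i] for i, fi in enumerate(f_tail) if k - 1 - i >= 0)
        a_p.append((ak - p + L) % (2 * L) - L)   # fold into [-L, L)
    return a_p
```

At the channel output, in the absence of noise, the sample a_p[k] + p[k] equals a_k + c_k, so a_k is recovered by the same memoryless fold at the receiver.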
13.3. Precoding and coding for dispersive channels 1011
Figure 13.4. Illustration of the efficient extension of a two-dimensional 16-QAM constellation
(L = √M = 4): points of the lattice Λ₀ with spacing Δ₀ and their Voronoi regions, and points
of the sublattice Λ_T = 2L Z², spaced LΔ₀ apart.
Figure 13.5. Partitioning of a 16-QAM constellation for trellis coding combined with TH
precoding: subsets B₀ and B₁ (Λ₁ + offset), points of the lattice Λ_T, and the subset of A
at partition level m̃ + 1 = 2, with Δ_{m̃+1} = Δ₂.
the partitioning of a 16-QAM constellation for trellis coding combined with TH precoding;
note that, summing a point of the lattice Λ_T (see Figure 13.4) with one of the points of the
subset obtained at the partitioning level m̃ + 1 = 2, we still obtain a point of the subset.

In the general case of a one-dimensional constellation A with L points, or a two-dimensional
constellation with L × L points, with L even, the combination of trellis coding
with TH precoding requires the application of trellis coding with feedback, or trellis
augmented precoding, that we will now discuss [14]. Note that for L-PAM and L × L-QAM
constellations, the existence of an efficient extension is immediately verified. Moreover, for
L even, we have that the subsets B₀ ⊂ Λ₁ + λ₀^{(1)} and B₁ ⊂ Λ₁ + λ₁^{(1)}, obtained at the first
level of partitioning, are congruent, through a translation defined by a point of Λ_T, with
sets of points that are again subsets of Λ₁ + λ₀^{(1)} and Λ₁ + λ₁^{(1)}, respectively. However,
this property is not necessarily verified for all partitioning levels up to level m̃ + 1; an
example is given by the subsets obtained at the partitioning level m̃ + 1 = 3 of a 6 × 6-QAM
constellation (see Figure 13.9).
A system using feedback to combine trellis coding with TH precoding is illustrated in
Figure 13.6. The state of the trellis code s_{k−1} is known at the transmitter. The symbol
a_k, into which the information represented by the binary vector b_k is mapped, is taken
from the set B_{y_k^{(0)}}, that is, one of the two subsets obtained at the first level of partitioning.
The set B_{y_k^{(0)}} is specified by the value y_k^{(0)} = 0 or 1 of the element with index k of the
sequence y^{(0)}(D), which is composed of the least significant bits of the vector sequence that
describes the evolution of the state of the trellis code. The output sample a_k^{(p)} is determined
by (13.41).

As previously mentioned, to allow correct decoding operations the sequence u(D) must
represent a valid code sequence or, in other words, at the instant k the symbol u_k must
represent a valid continuation of the code sequence u(D) starting from the code state s_{k−1}.
The code sequence u(D) is reproduced at the transmitter and presented at the input of a
unit that determines, from the knowledge of the state s_{k−1} and the symbol u_k, the next state
s_k; this unit determines the bit sequence y^{(0)}(D), such that the elements of the sequence
a(D) are chosen so that in turn u(D) is a valid code sequence.
The code sequence u(D), received in the presence of additive white Gaussian noise (see
(13.45)), is input to a Viterbi decoder that yields the detected symbol sequence û(D). A
detection â(D) of the sequence a(D) is obtained by the memoryless operation (13.46), thus
avoiding error propagation. A detection b̂(D) of the binary information vector sequence
b(D) is then obtained by the inverse bit-mapping operation performed at the transmitter.

Figure 13.6. System combining trellis coding (TCM encoding with feedback from the bit
sequence y^{(0)}(D)) with TH precoding; the feedback filter D f̃′(D) generates p(D) at the
transmitter and is replicated at the receiver.
An example of application of trellis augmented precoding will be given next.
Example 13.3.1
Consider the transmission system with trellis augmented precoding illustrated in Figure 13.7.
The code is an 8-state trellis code and the symbol constellation A is a 6 × 6-QAM
constellation. Assume that the channel frequency response exhibits spectral nulls at f = 0 and
f = 1/(2T) Hz, and is given by

    f̃(D) = (1 − D²) / (1 − ρD)      (13.48)

where 0 ≤ ρ < 1.
Figure 13.8a shows a conventional encoder for an 8-state trellis code that uses a
systematic convolutional encoder for a code with rate 2/3 followed by a bit-mapper, and
Figure 13.8b illustrates the code trellis diagram (see Chapter 12). The two-dimensional
constellation A with 6 × 6 points and the set partitioning that yields the signal subsets
assigned to the transitions on the trellis diagram are illustrated in Figure 13.9.

The mapping of the information bits b_k = (b_k^{(5)}, …, b_k^{(1)}) ∈ {(00000), (00001), …, (10001)},
an alphabet of cardinality 18, to symbols a_k ∈ B_{y_k^{(0)}}, where y_k^{(0)} ∈ {0, 1}, is illustrated
in Figure 13.9, where the lattice Λ₁, which will be used in the following, is also shown.
In particular, we show in Figure 13.10 the representation of the binary vector y_k =
(y_k^{(2)}, y_k^{(1)}, y_k^{(0)}), obtained using the set partitioning of Figure 13.9. Furthermore, we choose
for the symbols u_k the representation given by

    u_k = (u_{k,I}, u_{k,Q}) ∈ 2Z² + (1, 1)      (13.49)
Figure 13.7. Transmission system with trellis augmented precoding: bit-mapper b → a
(a_k ∈ B_{y^{(0)}}), precoder with feedback filter producing p(D), discrete-time channel
f̃(D) = (1 − D²)/(1 − ρD), additive noise w(D), Viterbi decoder, next-state computation
unit producing the sequences y^{(0)}(D), y^{(1)}(D), y^{(2)}(D), and inverse bit-mapper â → b̂.
Figure 13.8. Illustration of (a) a conventional encoder for an 8-state trellis code and (b) the
trellis diagram. In (a), bits b_k^{(2)}, b_k^{(1)} enter a rate-2/3 systematic convolutional encoder with
state s_{k−1} = (s_{k−1}^{(2)}, s_{k−1}^{(1)}, s_{k−1}^{(0)}), with s_{k−1}^{(0)} = y_k^{(0)}; the output bits
y_k^{(2)}, y_k^{(1)} select the subset, and b_k^{(5)}, b_k^{(4)}, b_k^{(3)} select the signal within the subset.
In (b), labelling each state by 4s^{(2)} + 2s^{(1)} + s^{(0)}, the subsets assigned to the transitions
from states 0–7 are:
0: D0 D2 D4 D6;  1: D1 D3 D5 D7;  2: D2 D0 D6 D4;  3: D3 D1 D7 D5;
4: D4 D6 D0 D2;  5: D5 D7 D1 D3;  6: D6 D4 D2 D0;  7: D7 D5 D3 D1.
The free distance is d²_free = Δ₁² + Δ₀² + Δ₁² = 5Δ₀², for a coding gain of
10 log₁₀[d²_free/Δ₁²] ≈ 4 dB.
that is

    u_{k,I} = 8u_{k,I}^{(3)} + 4u_{k,I}^{(2)} + 2u_{k,I}^{(1)} + 1 + c_{k,I}
    u_{k,Q} = 8u_{k,Q}^{(3)} + 4u_{k,Q}^{(2)} + 2u_{k,Q}^{(1)} + 1 + c_{k,Q}      (13.50)
Figure 13.9. Two-dimensional constellation A with 6 × 6 points and its set partitioning: at the
first level (bit y^{(0)}) A splits into subsets B₀ and B₁ on the lattice Λ₁; at the second level
(bit y^{(1)}) into C₀, C₂, C₁, C₃; and at the third level (bit y^{(2)}) into
D₀, D₄, D₂, D₆, D₁, D₅, D₃, D₇.
    y_k^{(1)} = u_{k,Q}^{(1)}
    y_k^{(2)} = u_{k,I}^{(2)} ⊕ u_{k,I}^{(1)} ⊕ u_{k,Q}^{(2)} ⊕ u_{k,Q}^{(1)}      (13.51)
    â(D) = û(D) − ĉ(D)      (13.53)
Then the detected sequence of information bits is obtained from the sequence â(D).
Figure. Efficient extension of the 6 × 6-QAM constellation: points of the lattice Λ₀ with
spacing Δ₀ = 2 and their Voronoi regions, points of the lattice Λ_T spaced LΔ₀ = 12 apart,
and symbols a ∈ B₀, a ∈ B₁.
Note that the assumption of perfectly known channel characteristics holds only in an
ideal situation. For example, if the considered method is applied to high-speed data
transmission over UTP cables, we recall from Section 4.4.2 that low-frequency disturbances
and near-end cross-talk at high frequency are the main impairments. In this case it is not
practical to convey information about the channel to the transmitter. The overall system
must therefore be designed for worst-case channel characteristics, and deviations from the
assumed characteristics must be compensated at the receiver by adaptive equalization.
First version. In the first version, illustrated in Figure 13.12, the transmitted signal a^{(p)}(D)
can be expressed as in (13.41), where p(D) is given by (13.42) and c(D) is obtained from the
quantization of p(D) with the quantizer Q_{Λ_{m̃+1}}. The quantizer Q_{Λ_{m̃+1}} yields the
k-th element of the sequence c(D) by
quantizing the sample p_k to the closest point of the lattice Λ_{m̃+1}, which corresponds to the
(m̃ + 1)-th level of partitioning of the signal set (see Chapter 12); in the case of an uncoded
sequence, Λ_{m̃+1} = Λ₀. Note that the dither signal can be interpreted as the signal
with minimum amplitude that must be added to the sequence a(D) to obtain a valid code
sequence at the channel output in the absence of noise. In fact, at the channel output we
get the sequence z(D) = u(D) + w(D), where u(D) is obtained by adding a sequence of
points taken from the lattice Λ_{m̃+1} to the code sequence a(D), and therefore it represents
a valid code sequence.
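The first version can be sketched in one dimension as follows, for an uncoded sequence; the names are illustrative, and the lattice is taken as spacing·Z, playing the role of Λ_{m̃+1}. The ISI p_k is computed as in (13.42), c_k quantizes p_k, and a_k − p_k + c_k is transmitted, so that the noiseless channel output is a_k + c_k:

```python
def flexible_precode(a, f_tail, spacing):
    """Flexible precoding, first version, 1-D sketch.
    f_tail are the taps [f1, f2, ...] of the monic channel f(D) = 1 + f1*D + ...;
    p[k] is the ISI (13.42), c[k] quantizes p[k] to the closest point of
    the lattice spacing*Z, and a_p[k] = a[k] - p[k] + c[k] is transmitted,
    so the dither p[k] - c[k] stays within [-spacing/2, spacing/2]."""
    a_p, c = [], []
    for k, ak in enumerate(a):
        p = sum(fi * a_p[k - 1 - i] for i, fi in enumerate(f_tail) if k - 1 - i >= 0)
        ck = spacing * round(p / spacing)   # quantizer Q to the lattice
        a_p.append(ak - p + ck)
        c.append(ck)
    return a_p, c
```

In the absence of noise the channel output u_k = a_p[k] + p[k] = a_k + c_k is the data sequence translated by lattice points, and a_k = u_k − c_k once ĉ(D) is detected.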
The sequence z(D) is input to a Viterbi decoder, which yields the detected sequence û(D).
To obtain a detection of the sequence a(D), it is first necessary to detect the sequence ĉ(D)
of the points of the lattice Λ_{m̃+1} added to the sequence a(D). Observing

    â^{(p)}(D) = û(D) / f̃(D)      (13.56)

and

    p̂(D) = D f̃′(D) â^{(p)}(D)      (13.57)
2. it is indispensable that the implementation of the blocks that perform similar functions
in the precoder and in the inverse precoder (see Figure 13.12) is identical with regard
to the binary representation of input and output signals;
3. as the dither signal can be assumed uniformly distributed in the Voronoi region
V(Λ_{m̃+1}) of the point of the lattice Λ_{m̃+1} corresponding to the origin, the transmit
power penalty is equal to Δ²_{m̃+1}/12 per dimension; this can significantly reduce the
coding gain if the cardinality of the constellation A is small;
4. to perform the inverse of the precoding operation, the inversion of the channel transfer
function is required (see (13.56)); if f̃(D) is minimum phase, then 1/f̃(D) is stable
and the effect of an error event at the Viterbi decoder output vanishes after a certain
number of iterations; on the other hand, if f̃(D) has zeros on the unit circle (spectral
nulls), error events at the Viterbi decoder output can result in an unlimited propagation
of errors in the detection of the sequence a(D).
Second version. The second version of flexible precoding, illustrated in Figure 13.13,
includes trellis coding with feedback, as previously discussed. In the precoder and in the
inverse precoder the quantizer Q_{Λ₁} is now used, which yields the element c_k by quantizing
the sample p_k to the closest point of the lattice Λ₁. Note that, if the symbol a_k ∈ B_{y_k^{(0)}},
where we recall that y_k^{(0)} represents the least significant bit of the trellis code state vector
s_{k−1}, then we have u_k = a_k + c_k ∈ B_{y_k^{(0)}}. In the coding method with feedback, from the
knowledge of the state s_{k−1} and the symbol u_k, the next state s_k is determined, and
consequently also the bit y_{k+1}^{(0)}; at the channel output, in the absence of noise, we therefore
obtain a valid code sequence u(D). Note that, as the dither signal is now uniformly distributed
in the region V(Λ₁), the transmit power penalty is reduced to Δ₁²/12 per dimension (see
Figure 13.9).
The problem of unlimited error propagation that occurs in flexible precoding if the
frequency response of the channel exhibits spectral nulls can be mitigated by referring to the
scheme illustrated in Figure 13.14. Consider the signals and regions for a two-dimensional
constellation A as illustrated, for example, in Figure 13.15 for a 16-QAM constellation;
assume that the transfer function of the channel, which can exhibit spectral nulls, has the form

    f̃(D) = [1 + D f̃_N(D)] / [1 + D f̃_D(D)]      (13.58)
To set a limit to error propagation, we exploit the knowledge that the expression of the
dither signal is such that the transmitted signal sample at instant k, a_k^{(p)}, must be confined
within the region R_{y_k^{(0)}}, obtained as the union of the Voronoi regions of the points of the
subset
Figure 13.13. Block diagram of a system with flexible precoding and trellis coding with
feedback.
Figure 13.14. Block diagram of a system with flexible precoding and trellis coding with
feedback that includes a method to mitigate error propagation in the inverse precoder.
Figure 13.15. Illustration of signals and signal regions of a 16-QAM constellation for a system
with flexible precoding and mitigation of the error propagation in the inverse precoder.
B_{y_k^{(0)}}, defined as

    R_{y_k^{(0)}} = {a_k^{(p)} = α + d : α ∈ B_{y_k^{(0)}}, d ∈ V(Λ₁)},   y_k^{(0)} = 0, 1      (13.59)
In the precoder and in the inverse precoder, we use identical units denoted as F (see
Figures 13.14 and 13.15). Considering, for example, the transmitter, we see that unit F
contains a non-linear element that limits the transmitted sequence sample a_k^{(p)} to the region
R_{y_k^{(0)}}, selected as R₀ if y_k^{(0)} = 0, or as R₁ if y_k^{(0)} = 1. Note that the output signal of the
non-linear element is the input of the recursive section of a filter; this section is used for the
inversion of the numerator of f̃(D), which determines the presence of spectral nulls in
the transfer function of the channel.
Example 13.3.2
Consider a dispersive AWGN channel with transfer function given by (13.48) with ρ = 7/8,
and a QAM transmission system with a 6 × 6 constellation for an 8-state trellis code. The
frequency response of the channel and the constellation A are illustrated in Figures 13.16
and 13.9, respectively. Recall that the frequency response of the channel exhibits spectral
nulls at f = 0 and f = 1/(2T) Hz.

Figure 13.17 illustrates the curves of symbol error probability as a function of the signal-to-noise
ratio Γ obtained by simulating systems that employ: a) TH precoding and trellis
coding with feedback; b) flexible precoding and trellis coding with feedback and limitation
of error propagation; c) uncoded 18-QAM and a receiver with DFE. Note that, for the
considered AWGN channel and a transmission of log₂(18) bit/s/Hz with an 8-state trellis
code and 6 × 6-QAM, system a) offers a margin larger than 1 dB with respect to system
b), and a margin of 3 dB with respect to the uncoded system. This result can be explained
by recalling that error propagation is completely eliminated with TH precoding, while
it can only be mitigated with flexible precoding.
Figure 13.16. Frequency response (amplitude in dB versus fT) with spectral nulls at f = 0
and f = 1/(2T) Hz.
Figure 13.17. Symbol error probability as a function of the signal-to-noise ratio for systems
that use: (a) TH precoding combined with trellis coding with feedback, (b) flexible precoding
combined with trellis coding with feedback and limitation of error propagation, (c) uncoded
18-QAM and receiver with DFE.
Figure 13.18. Block diagram of a system with flexible precoding and trellis coding with
feedback, and a transmit power penalty equal to Δ₀²/12 per dimension.
A further reduction of the transmit power penalty with flexible precoding and trellis
coding with feedback can be obtained by using the scheme shown in Figure 13.18, where the
quantizer Q_{Λ₀} yields the element c_k by quantizing the sample p_k to the closest point of the
lattice Λ₀. Note that the dither signal is now uniformly distributed in V(Λ₀) and therefore
the transmit power penalty is reduced to Δ₀²/12 per dimension, that is, the same value
obtained with TH precoding. Observe that we have c_k ∈ Λ₁ or c_k ∈ Λ₁ + (1, 0). To obtain
a valid code sequence u(D), we recall that it is necessary to obtain u_k = a_k + c_k ∈ B_{y_k^{(0)}}.
Figure 13.19. Method of mitigating error propagation for the system of Figure 13.18.
Figure 13.20. Illustration of signals and signal regions of a 16-QAM constellation for the
system of Figure 13.19.
Bibliography
[1] R. Price, “Nonlinearly feedback-equalized PAM versus capacity for noisy filter chan-
nels”, Proc. 1972 Int. Conf. Comm., pp. 22.12–17, June 1972.
[2] I. Kalet, “The multitone channel”, IEEE Trans. on Communications, vol. 37, pp. 119–
124, Feb. 1989.
[3] J. A. C. Bingham, “Multicarrier modulation for data transmission: an idea whose time
has come”, IEEE Communications Magazine, vol. 28, pp. 5–14, May 1990.
[5] A. Leke and J. M. Cioffi, “A maximum rate loading algorithm for discrete multitone
systems”, in Proc. Globecom 1997, pp. 1514–1518, Nov. 1997.
[6] J. Campello, “Practical bit loading for DMT”, in Proc. IEEE International Conference
on Communications, Vancouver, Canada, pp. 801–805, June 1999.
[7] G. Cherubini, E. Eleftheriou, and S. Ölcer, “Filtered multitone modulation for very-
high-speed digital subscriber lines”, IEEE Journal on Selected Areas in Communica-
tions, June 2002.
[9] G. Ungerboeck, “Channel coding with multilevel/phase signals”, IEEE Trans. on In-
formation Theory, vol. 28, pp. 55–67, Jan. 1982.
[10] G. D. Forney, Jr. and G. Ungerboeck, “Modulation and coding for linear Gaussian
channels”, IEEE Trans. on Information Theory, vol. 44, pp. 2384–2415, Oct. 1998.
[11] M. V. Eyuboglu and G. D. Forney, Jr., “Trellis precoding: combined coding, precod-
ing and shaping for intersymbol interference channels”, IEEE Trans. on Information
Theory, vol. 38, pp. 301–314, Mar. 1992.
[12] R. Laroia, S. A. Tretter, and N. Farvardin, “A simple and effective precoding scheme
for noise whitening on intersymbol interference channels”, IEEE Trans. on Commu-
nications, vol. 41, pp. 1460–1463, Oct. 1993.
[13] R. Laroia, “Coding for intersymbol interference channels—Combined coding and pre-
coding”, IEEE Trans. on Information Theory, vol. 42, pp. 1053–1061, July 1996.
[14] G. Cherubini, S. Ölcer, and G. Ungerboeck, “Trellis precoding for channels with spec-
tral nulls”, in Proc. 1997 IEEE Int. Symposium on Information Theory, Ulm, Germany,
p. 464, June 1997.
[15] M. Tomlinson, “New automatic equalizer employing modulo arithmetic”, Electronics
Letters, vol. 7, pp. 138–139, Mar. 1971.
[16] H. Harashima and H. Miyakawa, “Matched transmission technique for channels with
intersymbol interference”, IEEE Trans. on Communications, vol. 20, pp. 774–780,
Aug. 1972.
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 14
Synchronization
As a generalization of the receiver block diagram shown in Figure 7.5, the representation of
the analog front end for a passband QAM system is illustrated in Figure 14.1. The received
signal r(t) is multiplied by a complex-valued carrier generated by a local oscillator, then
filtered by an anti-imaging filter, g_AI, that removes the spectral image of the signal, yielding
the I and Q components of the signal r_C(t). Often the function of the anti-imaging filter is
performed by other filters; however, here g_AI is considered only as a model for the analysis
and is assumed to be non-distorting. The receive oscillator is independent of the transmit
and is assumed to be non-distorting. The receive oscillator is independent of the transmit
oscillator; consequently carrier recovery must be performed using one of the following two
strategies.
1. The first consists in multiplexing, usually in the frequency domain, a special signal,
called a pilot signal, at the transmitter. This allows extracting the carrier at the receiver
and therefore synchronizing the receive oscillator in phase and frequency with the
transmit oscillator. If the pilot signal v(t) consists of a non-modulated carrier, carrier
recovery is obtained by the phase-locked loop (PLL) described in Section 14.2.
2. The second consists in getting the carrier directly from the modulated signal; this
approach has the advantage that all the transmitted power is allocated for the trans-
mission of the signal carrying the desired information. Some structures that implement
this strategy are reported in Section 14.3.
In this chapter we will discuss methods for carrier phase and frequency recovery, as well
as algorithms to estimate the timing phase. To avoid ambiguity, we refer to the latter
as timing recovery algorithms, dropping the term phase. These algorithms are developed
for application in the PAM and QAM transmission systems of Chapter 7, and the spread
spectrum systems of Chapter 10. The problem of synchronization of OFDM systems was
mentioned in Section 9.7.
Therefore, if the carrier generated at the transmitter, as observed at the receiver (received
carrier), is given by exp{j(2πf₀t + φ₀)},¹ in general we have a reconstruction error of the
carrier phase φ_PA(t) given by

    φ_PA(t) = (2πf₁t + φ₁) − (2πf₀t + φ₀)      (14.2)

Let

    Ω = 2π(f₀ − f₁)
    ϕ = φ₀ − φ₁      (14.3)

then (14.2) can be rewritten as

    φ_PA(t) = −Ωt − ϕ      (14.4)
With reference to the notation of Figure 7.12, observing that now the phase offset is not
included in the baseband equivalent channel impulse response, we have

    g_C^{(bb)}(t) = (1/(2√2)) g_Ch(t) e^{−j arg G_Ch(f₀)}      (14.5)

The resulting baseband equivalent scheme of a QAM system is given in Figure 14.2. We
assume that the anti-imaging filter frequency response is flat within the frequency interval

    |f| ≤ B + Ω_max/(2π)      (14.6)

where B is the bandwidth of the signal s_C and Ω_max is the maximum value of |Ω|.
Figure 14.2. Baseband equivalent model of the channel and analog front end for a QAM
system, with additive noise w_C(t) and phase rotation e^{−jφ_PA(t)}.
¹ In this chapter φ₀ is given by the sum of the phase of the transmitted carrier and the channel phase at f = f₀,
equal to arg G_Ch(f₀).
14.2. The phase-locked loop 1029
The received signal r_C is affected by a frequency offset Ω and a phase offset ϕ; moreover,
the transmit filter h_Tx and the channel filter g_C introduce a transmission delay t₀. To
simplify the analysis, this delay is assumed to be known with an error that is in the range
(−T/2, T/2). This coarse timing estimate can be obtained, for example, by a correlation
method with known input (see (7.269)). This corresponds to assuming the overall pulse q_C
non-causal, with peak at the origin.
Once we set t_0 = \varepsilon T, with |\varepsilon| \le 1/2, the signal r_C(t) can be written as

r_C(t) = e^{j(\Omega t + \theta)} \sum_{k=-\infty}^{+\infty} a_k\, q_C(t - kT - \varepsilon T) + w_{C\varphi}(t)   (14.7)

where

q_C(t) = (h_{Tx} * g_C)(t + \varepsilon T)
w_{C\varphi}(t) = w_C(t)\, e^{-j\varphi_{PA}(t)}   (14.8)
Furthermore, the receiver clock is independent of the transmitter clock; consequently the receiver clock period, which we denote as T_c, is different from the symbol period T at the transmitter: we assume that the ratio F_0 = T/T_c is in general a real number.
The synchronization process consists of recovering the carrier, in the presence of the phase offset \theta and the frequency offset \Omega, and the timing phase or time shift \varepsilon T.
where the phase \varphi_0(t) is a slowly time-varying function with respect to \omega_0, or

\left| \frac{d\varphi_0(t)}{dt} \right| \ll \omega_0   (14.10)
and we let
thus
and \hat\varphi_0(t) then represents the estimate of the phase \varphi_0(t) obtained by the PLL.
We define the phase error as the difference between the instantaneous phases of the signals v(t) and v_{VCO}(t), that is

\Delta\varphi(t) = (\omega_0 t + \varphi_0(t)) - (\omega_1 t + \varphi_1(t)) = \varphi_0(t) - \hat\varphi_0(t)   (14.14)
where K_D denotes the phase detector gain; we observe that e(t) is an odd function of \Delta\varphi(t); therefore the PD produces a signal having the same sign as the phase error, at least for values of \Delta\varphi between -\pi and +\pi;
• a lowpass filter F(s), called loop filter, whose output u(t) is equal to
• a voltage-controlled oscillator (VCO), which provides a periodic output signal v_{VCO}(t)
whose phase 'O0 .t/ satisfies the relation
\frac{d\hat\varphi_0(t)}{dt} = K_0\, u(t)   (14.17)
called the VCO control law, where K_0 denotes the VCO gain.
In practice, the PD is often implemented by a simple multiplier; then the signal e(t) is proportional to the product of v(t) and v_{VCO}(t). If K_m denotes the multiplier gain and we define

K_D = \frac{1}{2} A_1 A_2 K_m   (14.18)

then we obtain

e(t) = K_m\, v(t)\, v_{VCO}(t)
Equation (14.22) is the integro-differential equation that governs the dynamics of the PLL. Later we will study this equation for particular expressions of the phase \varphi_0(t), and only for the case \Delta\varphi(t) \simeq 0, i.e. assuming the PLL is in the steady state or in the so-called lock condition; the transient behavior, that is the case \Delta\varphi(t) \ne 0, is difficult to analyze and we refer to [1] for further study.
Linear approximation
Assume that the phase error \Delta\varphi(t) is small, or \Delta\varphi(t) \simeq 0; then the following approximation holds
In this way the non-linear block K_D \sin(\cdot) of Figure 14.4 becomes a multiplier by the constant K_D, and the whole structure is linear, as illustrated in the simplified block diagram of Figure 14.5.
We denote by \Delta\Phi(s) the Laplace transform of \Delta\varphi(t); by taking the Laplace transform of (14.24) and assuming \hat\varphi_0(0) = 0 we obtain

H(s) = \frac{\hat\Phi_0(s)}{\Phi_0(s)} = \frac{K F(s)}{s + K F(s)}, \qquad K = K_D K_0   (14.26)

Then from (14.26) we get the following two relations:

\Delta\Phi(s) = \Phi_0(s) - \hat\Phi_0(s) = [1 - H(s)]\, \Phi_0(s)   (14.27)

\frac{\Delta\Phi(s)}{\Phi_0(s)} = \frac{1}{1 + [K F(s)/s]}   (14.28)
We define as steady state error \Delta\varphi_\infty the limit for t \to \infty of \Delta\varphi(t); recalling the final value theorem, and using (14.28), \Delta\varphi_\infty can be computed as follows:

\Delta\varphi_\infty = \lim_{t\to\infty} \Delta\varphi(t) = \lim_{s\to 0} s\, \Delta\Phi(s) = \lim_{s\to 0} s\, \Phi_0(s)\, \frac{1}{1 + K F(s)/s}   (14.29)
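The steady-state results derived from (14.29) can be checked numerically. The sketch below (a simple Euler discretization of the loop equation with F(s) = 1, and illustrative, assumed gain and step sizes) confirms that a first-order loop nulls the error for a phase step, while a frequency step leaves a residual error of about \omega_s/K:

```python
import numpy as np

# Numeric check of (14.29)-(14.31): Euler discretization of the PLL
# equation d(phi_hat)/dt = K*sin(phi0(t) - phi_hat) with F(s) = 1.
# Gain K, step dt and durations are illustrative, not from the text.
def run_pll(phi0_fn, K=200.0, dt=1e-4, n=200000):
    phi_hat = 0.0
    for i in range(n):
        t = i * dt
        phi_hat += dt * K * np.sin(phi0_fn(t) - phi_hat)
    return phi0_fn(n * dt) - phi_hat      # phase error at the end

# Phase step phi0(t) = 0.5: steady-state error -> 0
err_step = run_pll(lambda t: 0.5)
assert abs(err_step) < 1e-6

# Frequency step phi0(t) = omega_s*t: residual error ~ omega_s/K
omega_s = 20.0
err_freq = run_pll(lambda t: omega_s * t)
assert abs(err_freq - omega_s / 200.0) < 1e-3
```

(The residual for the frequency step is actually \arcsin(\omega_s/K), which for \omega_s/K = 0.1 differs from the linearized value \omega_s/K by under 2\times 10^{-4}.)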
We now compute the value of \Delta\varphi_\infty for the three expressions of \varphi_0(t) given in Table 14.1, along with the corresponding Laplace transforms.
thus we obtain

\Delta\varphi_\infty = 0 \iff F(0) \ne 0   (14.31)

Observe that (14.31) holds even if F(s) = 1, i.e. in case the loop filter is absent.
If we choose
then \Delta\varphi_\infty = 0.
1034 Chapter 14. Synchronization
If we use a loop filter of the type (14.33) with k = 1, i.e. with one pole at the origin, then we obtain a steady state error \Delta\varphi_\infty given by

\Delta\varphi_\infty = \frac{\omega_r}{K F_1(0)} \ne 0   (14.35)

As a general rule we can state that, in the presence of an input signal having Laplace transform of the type s^{-k} with k \ge 1, to get a steady state error \Delta\varphi_\infty = 0, a filter with at least (k-1) poles at the origin is needed.
The choice of the above elementary expressions of the phase '0 .t/ for the analysis is
justified by the fact that an arbitrary phase '0 .t/ can always be approximated by a Taylor
series expansion truncated to the second order, and therefore as a linear combination of the
considered functions.
Figure 14.7. Linearized PLL baseband model in the presence of additive noise.
where
H(f) = H(j2\pi f)   (14.45)
To obtain P_{w_e}(f) we use (14.41). Assuming w_I(t) and w_Q(t) are uncorrelated white random processes with autocorrelation

r_{w_I}(\tau) = r_{w_Q}(\tau) = N_0\, \delta(\tau)   (14.46)

and using the property of the Dirac function \delta(\tau) f(t+\tau) = \delta(\tau) f(t), the autocorrelation of w_e(t) turns out to be

r_{w_e}(\tau) = K_w^2 N_0\, \delta(\tau)

hence

P_{w_e}(f) = K_w^2 N_0   (14.48)
Therefore, using (14.18) and (14.38), from (14.44) we get the variance of the phase error, given by

\sigma_{\Delta\varphi}^2 = \frac{1}{K_D^2} \int_{-\infty}^{+\infty} |H(f)|^2\, P_{w_e}(f)\, df = \frac{N_0}{A_1^2} \int_{-\infty}^{+\infty} |H(f)|^2\, df   (14.49)
From (1.139) we now define the equivalent noise bandwidth of the loop filter as

B_L = \frac{\displaystyle\int_0^{+\infty} |H(f)|^2\, df}{|H(0)|^2}   (14.50)

Then (14.49) can be written as

\sigma_{\Delta\varphi}^2 = \frac{2 N_0 B_L}{A_1^2}   (14.51)

where A_1^2/2 is the statistical power of the desired input signal, and N_0 B_L = (N_0/2)\, 2B_L is the input noise power evaluated over a bandwidth B_L.
In Table 14.2 the expressions of B_L for different choices of the loop filter F(s) are given.
Table 14.2. Expressions of B_L for different choices of the loop filter F(s).
H(s) = \frac{2\zeta\omega_n s + \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}   (14.54)
Once the expression of \varphi_0(t) is known, \Delta\varphi(t) can be obtained from (14.27) by finding the inverse transform of \Delta\Phi(s). The relation is simplified if in place of s we introduce the normalized variable \tilde s = s/\omega_n; in this case we obtain

\Delta\Phi(\tilde s) = \frac{\tilde s^2}{\tilde s^2 + 2\zeta \tilde s + 1}\, \Phi_0(\tilde s)   (14.55)

which depends only on the parameter \zeta.
In Figures 14.8, 14.9, and 14.10 we show the plots of \Delta\varphi(t), with \zeta as a parameter, for the three inputs of Table 14.1, respectively. Note that for the first two inputs \Delta\varphi(t) converges to zero, while for the third input it converges to a non-zero value (see (14.35)), because F(s) has only one pole at the origin, as can be seen from Table 14.2. We note that if \varphi_0(t) is a phase step the speed of convergence increases with increasing \zeta, whereas if \varphi_0(t) is a frequency step the speed of convergence is maximum for \zeta = 1.
In Figure 14.11 the plot of B_L is shown as a function of \zeta. As

B_L = \omega_n \left( \frac{\zeta}{2} + \frac{1}{8\zeta} \right)   (14.56)
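Expression (14.56) can be verified directly against the definition (14.50). The sketch below (assumed \omega_n, brute-force numerical integration of |H(f)|^2) compares the two, and also checks that the minimum of B_L over \zeta falls at \zeta = 0.5:

```python
import numpy as np

# Numeric check of (14.56) for the second-order loop (14.54):
# BL = int_0^inf |H(f)|^2 df / |H(0)|^2 should equal wn*(zeta/2 + 1/(8*zeta)).
wn = 2 * np.pi * 10.0  # natural frequency in rad/s (illustrative value)

def BL_numeric(zeta, fmax=5e4, n=2_000_000):
    f = np.linspace(0.0, fmax, n)
    df = f[1] - f[0]
    s = 1j * 2 * np.pi * f
    H = (2 * zeta * wn * s + wn**2) / (s**2 + 2 * zeta * wn * s + wn**2)
    return np.sum(np.abs(H) ** 2) * df     # H(0) = 1, so no normalization needed

for zeta in (0.5, 0.7, 1.0, 2.0):
    closed = wn * (zeta / 2 + 1 / (8 * zeta))
    assert abs(BL_numeric(zeta) - closed) / closed < 1e-2

# The closed-form BL has its minimum at zeta = 0.5, as noted in the text
zs = np.linspace(0.2, 3.0, 281)
bl = wn * (zs / 2 + 1 / (8 * zs))
assert abs(zs[np.argmin(bl)] - 0.5) < 0.02
```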
Figure 14.8. Plots of \Delta\varphi(\tau)/\varphi_s as a function of \tau = \omega_n t, for a second-order loop filter with a phase step input signal: \varphi_0(t) = \varphi_s \cdot 1(t) (curves for \zeta = 0.5, 0.7, 1, 2, 5).
Figure 14.9. Plots of \Delta\varphi(\tau)/(\omega_s/\omega_n) as a function of \tau = \omega_n t, for a second-order loop filter with a frequency step input signal: \varphi_0(t) = \omega_s t \cdot 1(t) (curves for \zeta = 0.5, 0.7, 1, 2, 5).
Figure 14.10. Plots of \Delta\varphi(\tau)/(\omega_r/\omega_n^2) as a function of \tau = \omega_n t, for a second-order loop filter with a frequency ramp input signal: \varphi_0(t) = \omega_r (t^2/2) \cdot 1(t) (curves for \zeta = 0.5, 0.7, 1, 2, 5).
Figure 14.11. Plot of B_L/\omega_n as a function of \zeta.
we note that B_L has a minimum for \zeta = 0.5, and that for \zeta > 0.5 it increases as \zeta/2; the choice of \zeta is therefore critical and represents a trade-off between the variance of the phase error and the speed of convergence.
For a detailed analysis of the second and third-order loops we refer to [1].
2 In practice it is sufficient that G_C(f) is Hermitian in a small interval around f = 0 (see page 32).
14.3. Costas loop 1041
Figure 14.13. Baseband and passband components of the product of two generic passband
signals, y1 .t/ and y2 .t/, as a function of their complex envelopes.
3 In PAM-SSB and PAM-VSB systems, (h_N^{(bb)} * a^2)(t) will also contain a quadrature component.
Thus we have obtained a sinusoidal signal with frequency 2f_0, phase 2\varphi_0, and slowly varying amplitude, which is a function of the bandwidth of H_N(f).
The carrier can be reconstructed by passing the signal y(t) through a limiter, which eliminates the dependence on the amplitude, and then through a frequency divider that returns a sinusoidal signal with frequency and phase equal to half those of the square wave.
In the case of a time-varying phase \varphi_0(t), the signal y(t) can be sent to a PLL with a VCO that operates at frequency 2f_0 and generates a reference signal equal to
The signal v_{VCO} must then be sent to a frequency divider to obtain the desired carrier. The block diagram of this structure is illustrated in Figure 14.14. Observe that the passband filter H_N(f) is substituted by the lowpass filter H_{LPF}(f), with H_{LPF}(0) = 1, inserted in the feedback loop of the PLL; this structure is called the squarer/PLL.
An alternative tracking structure to the squarer/PLL, called a Costas loop, is shown in Figure 14.15. In a Costas loop the signal e(t) is obtained by multiplying the I and Q components of the signal r(t); the VCO directly operates at frequency f_0, thus eliminating the frequency divider, and generates the reconstructed carrier

v_{VCO}(t) = \sqrt{2}\, A \cos(2\pi f_0 t + \hat\varphi_0(t))   (14.67)
By the equivalences of Figure 14.13 we find that the input of the loop filter is identical to
that of the squarer/PLL, and is given by
e(t) = \frac{A^2}{4}\, a^2(t)\, \sin[2\Delta\varphi(t)]   (14.68)

where \Delta\varphi is the phase error (14.14).
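The error signal (14.68) is an odd function of the phase error, which is what lets a feedback loop drive \Delta\varphi to zero. A minimal baseband sketch of this idea for binary symbols (illustrative gains, noiseless, not the full passband Costas loop of Figure 14.15) is:

```python
import numpy as np

# Baseband sketch of the Costas-loop principle for binary symbols:
# the product of the I and Q components of the de-rotated signal behaves
# as (1/2)*a_k^2*sin(2*(phi0 - phi_hat)), an odd function of the phase
# error. Gains and sizes are illustrative assumptions.
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=5000)    # binary (BPSK-like) symbols
phi0 = 0.4                                 # unknown carrier phase
r = a * np.exp(1j * phi0)                  # noiseless rotated symbols

phi_hat, mu = 0.0, 0.05
for rk in r:
    y = rk * np.exp(-1j * phi_hat)
    e = y.real * y.imag                    # ~ (1/2) a^2 sin(2*(phi0 - phi_hat))
    phi_hat += mu * e                      # first-order loop update
# Convergence to phi0 modulo pi (the well-known binary phase ambiguity)
err = (phi_hat - phi0 + np.pi / 2) % np.pi - np.pi / 2
assert abs(err) < 1e-3
```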
We now compute the fourth power of s_{Ch}(t); after a few steps we obtain

s_{Ch}^4(t) = \frac{1}{8}\, \mathrm{Re}[a^4(t) \exp(j8\pi f_0 t + j4\varphi_0)] + \frac{3}{8}\, |a(t)|^4
r = s + w   (14.73)
The quantities \|\rho\|^2 and \|s\|^2 are constants^5 [3]; therefore (14.74) is proportional to the likelihood

L_{\theta,\varepsilon,\Omega,a}(z, e, o, \boldsymbol\alpha) = \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}[\langle \rho, s \rangle] \right\}   (14.75)
Referring to the transmission of K symbols, or to a sufficiently large observation interval T_K = KT, (14.75) can be written as

L_{\theta,\varepsilon,\Omega,a}(z, e, o, \boldsymbol\alpha) = \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}\left[ \int_{T_K} \rho(t)\, s_C^*(t)\, e^{-j(ot+z)}\, dt \right] \right\}   (14.76)
Inserting the expression (14.72) of s_C(t), limited to the transmission of K symbols, in (14.76) and interchanging the operations of summation and integration we obtain

L_{\theta,\varepsilon,\Omega,a}(z, e, o, \boldsymbol\alpha) = \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}\left[ \sum_{k=0}^{K-1} \alpha_k^*\, e^{-jz} \int_{T_K} \rho(t)\, e^{-jot}\, q_C^*(t - kT - eT)\, dt \right] \right\}   (14.77)
We introduce the matched filter^6

g_M(t) = q_C^*(-t)   (14.78)

and assume that the pulse (q_C * g_M)(t) is a Nyquist pulse; therefore there exists a suitable sampling phase for which ISI is avoided.
Finally, if we denote the integral in (14.77) by x(kT + eT; o), that is
Let us now suppose that the optimum values of z, e, o that maximize (14.80), i.e. the estimates of \theta, \varepsilon, \Omega, have been determined in some manner.
5 Here we are not interested in the detection of a, as in the formulation of Section 8.10, but rather in the estimate of the parameters \theta, \varepsilon, and \Omega; in this case, if the observation is sufficiently long we can assume that \|s\|^2 is invariant with respect to the different parameters.
6 In this formulation, the filter g_M is anticausal; in practice a delay equal to the duration of q_C must be taken into account.
Figure 14.18. Optimum receiver: the signal r_C(t) is rotated by e^{-j\hat\Omega t}, filtered by g_M(t), sampled at kT + \hat\varepsilon T to give x(kT + \hat\varepsilon T; \hat\Omega), rotated by e^{-j\hat\theta}, and sent to the data detector that outputs \hat a_k.
The structure of the optimum receiver derived from (14.80) is illustrated in Figure 14.18. The signal r_C(t) is multiplied by \exp\{-j\hat\Omega t\}, where \hat\Omega is an estimate of \Omega, to remove the frequency offset; it is then filtered by the matched filter g_M(t) and sampled at the instants kT + \hat\varepsilon T, where \hat\varepsilon is an estimate of \varepsilon. The samples x(kT + \hat\varepsilon T; \hat\Omega) are then multiplied by \exp\{-j\hat\theta\} to remove the phase offset. Finally, the data detector decides on the symbol \hat a_k that, in the absence of ISI, maximizes the k-th term of the summation in (14.80) evaluated for (z, e, o) = (\hat\theta, \hat\varepsilon, \hat\Omega):

\hat a_k = \arg\max_{\alpha_k} \mathrm{Re}[\alpha_k^*\, x(kT + \hat\varepsilon T; \hat\Omega)\, e^{-j\hat\theta}]   (14.81)
The digital version of the scheme of Figure 14.18 is illustrated in Figure 14.19; it uses an anti-aliasing filter and a sampler with period T_c such that (recall the sampling theorem on page 30)

\frac{1}{2T_c} \ge B + \frac{\Omega_{max}}{2\pi}   (14.82)
Observation 14.1
To simplify the implementation of the digital receiver, the ratio F_0 = T/T_c is chosen as an integer; in this case, for F_0 = 4 or 8, the interpolator filter may be omitted and the timing after the matched filter has a precision of T_c = T/F_0. This approach is usually adopted in radio systems for the transmission of packet data.
To conclude this section, we briefly discuss the algorithms for timing and carrier phase
recovery.
Timing recovery
Ideally, at the output of the anti-aliasing filter the received signal should be sampled at the instants t = kT + \varepsilon T; however, there are two problems:
1. the value of \varepsilon is not known;
2. the clock at the receiver allows sampling at multiples of T_c, not at multiples of T, and the ratio T/T_c is not necessarily a rational number.
Therefore time synchronization methods are usually composed of two basic functions.
Figure 14.19. Digital receiver for QAM systems. [From Meyr, Moeneclaey, and Fechtel (1998). Reproduced by permission of Wiley.]
The quantity saw(\hat\varepsilon_{k+1} - \hat\varepsilon_k) is often substituted for (\hat\varepsilon_{k+1} - \hat\varepsilon_k), where saw(x) is the saw-tooth function illustrated in Figure 14.20. Thus, the difference between two successive estimates belongs to the interval [-1/2, 1/2]; this choice reduces the effects that a wrong estimate of \varepsilon would have on the value of the pair (m_k, \mu_k).
Figure 14.21 illustrates the graphic representation of (14.84) in the ideal case \hat\varepsilon = \varepsilon.
The transmitter time scale, defined by multiples of T , is shifted by a constant quantity
equal to "T . The receiver time scale is defined by multiples of Tc . The fact that the ratio
T /Tc may be a non-rational number has two consequences: first, the time shift ¼k Tc is time
varying even if "T is a constant; second, the instants mk Tc form a non-uniform subset of
the receiver time axis, such that on average the considered samples are separated by an
interval T .
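The bookkeeping between the two time scales can be sketched as follows: each desired instant kT + \hat\varepsilon T is decomposed into an integer part m_k and a fractional part \mu_k on the receiver grid (the values of T, T_c and \varepsilon below are illustrative):

```python
import numpy as np

# Decompose the desired sampling instants k*T + eps*T on the receiver
# grid as (m_k + mu_k)*Tc, with m_k integer and 0 <= mu_k < 1.
# T/Tc is deliberately an awkward ratio, so mu_k varies with k.
T, Tc, eps = 1.0, 0.3101, 0.2     # illustrative values
for k in range(10):
    t_k = k * T + eps * T
    m_k = int(np.floor(t_k / Tc))
    mu_k = t_k / Tc - m_k
    # The pair (m_k, mu_k) reproduces the desired instant exactly
    assert abs((m_k + mu_k) * Tc - t_k) < 1e-12
    assert 0.0 <= mu_k < 1.0
```

Note how, as the text observes, \mu_k changes from symbol to symbol even though \varepsilon T is constant, while the integers m_k advance by 3 or 4 so that the selected samples are spaced by T on average.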
14.4. The optimum receiver 1049
Figure 14.21. (a) Transmitter time scale; (b) receiver time scale with Tc < T, for the ideal case
"ˆ D ". [From Meyr, Moeneclaey, and Fechtel (1998). Reproduced by permission of Wiley.]
With reference to Figure 14.19, to obtain the samples of the signal x(t) at the instants (m_k + \mu_k) T_c we can proceed as follows:
a) implement a digital interpolator filter that provides samples of the received signal at the instants (n + \mu_k) T_c, starting from samples at the instants nT_c (see Section 1.A.5);
b) implement a downsampler that yields samples at the instants (m_k + \mu_k) T_c = kT + \hat\varepsilon T.
With regard to the digital interpolator filter, consider a signal r_D(t) with bandwidth B_r = 1/(2T_c). From the sampling theorem, the signal r_D(t) can be reconstructed from its samples r_D(iT_c) using the relation

r_D(t) = \sum_{i=-\infty}^{+\infty} r_D(iT_c)\, \mathrm{sinc}\!\left( \frac{t - iT_c}{T_c} \right)   (14.89)
This expression is valid for all t; in particular it is valid for t = t_1 + \mu_k T_c, thus yielding the signal r_D(t_1 + \mu_k T_c). Sampling this signal at t_1 = nT_c we obtain

r_D(nT_c + \mu_k T_c) = \sum_{i=-\infty}^{+\infty} r_D(iT_c)\, \mathrm{sinc}(n + \mu_k - i)   (14.90)
Observe that the right-hand side of (14.90) is a discrete-time convolution; in fact, introducing the interpolator filter with impulse response h_I and parameter \mu_k, to obtain from samples of r_D(t) at instants nT_c the samples at nT_c + \mu_k T_c we can use a filter with impulse response h_I(iT_c; \mu_k).^7
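A minimal sketch of the interpolation (14.90), with the sinc series truncated to a finite number of taps (the truncation and its accuracy are the subject of footnote 7; the test signal and parameters below are illustrative):

```python
import numpy as np

# Fractional-delay interpolation per (14.90): samples at n*Tc are turned
# into samples at n*Tc + mu*Tc with taps h_I(i*Tc; mu) = sinc(i + mu),
# truncated here to N+1 taps (an assumption; better designs exist).
Tc, mu, N = 1.0, 0.37, 64
n = np.arange(-200, 200)
# Bandlimited test signal, well below the Nyquist frequency 1/(2*Tc)
x = np.cos(2 * np.pi * 0.11 * n * Tc) + 0.5 * np.sin(2 * np.pi * 0.07 * n * Tc)

i = np.arange(-N // 2, N // 2 + 1)
h = np.sinc(i + mu)                  # truncated interpolator taps
n0 = 200                             # array index of n = 0 (avoids edges)
# y = sum_i x((n0 - i)*Tc) * h_I(i*Tc; mu)  ->  x(n0*Tc + mu*Tc)
y = np.sum(x[n0 - i] * h)
exact = np.cos(2 * np.pi * 0.11 * mu) + 0.5 * np.sin(2 * np.pi * 0.07 * mu)
assert abs(y - exact) < 0.05         # coarse bound for the truncated sinc
```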
With regard to the cascade of the matched filter g_M(iT_c) and the decimator at the instants m_k T_c, we point out that a more efficient solution is to implement a filter with input at instants nT_c that generates output samples only at the instants m_k T_c.
We conclude this section by recalling that, if after the matched filter g_M, or directly in place of g_M, there is an equalizer filter c with input signal having sampling period equal to T_c, the function of the filter h_I is performed by the filter c itself (see Section 8.4).
1) Phase estimate. In the scheme of Figure 14.19, phase estimation is performed after the
matched filter, using samples with sampling period equal to the symbol period T . In this
scheme, timing recovery is implemented before phase recovery, and must operate in one
of the following modes:
a) with an arbitrary phase offset;
7 In practice, the filter impulse response h_I(\cdot; \mu_k) must have a finite number N of coefficients. The choice of N depends on the ratio T/T_c and on the desired precision; for example, for T/T_c = 2 and a normalized MSE given by

J = 2T \int_0^1 \int_{-1/(2T)}^{1/(2T)} \left| e^{j2\pi f \mu T_c} - H_I(f; \mu) \right|^2\, df\, d\mu   (14.93)

where

H_I(f; \mu) = \sum_{i=-(N/2)+1}^{N/2} h_I(iT_c; \mu)\, e^{-j2\pi f i T_c}   (14.94)

equal to -50 dB, it turns out N \simeq 5. Of course, more efficient interpolator filters than that defined by (14.91) can be utilized.
14.5. Algorithms for timing and carrier phase recovery 1051
2) Phase rotation.
a) The samples x(kT + \hat\varepsilon T; \hat\Omega) are multiplied by the complex signal \exp(-j\hat\theta(kT)) (see Figure 14.19); a possible residual frequency offset \Omega_1 can be corrected by a time-varying phase given by \hat\theta(kT) = \hat\theta + kT\, \Omega_1.
b) The samples x(kT + \hat\varepsilon T; \hat\Omega)\, e^{-j\hat\theta} are input to the data detector, assuming (\hat\theta, \hat\varepsilon) are the true values of (\theta, \varepsilon).
the true values of .; "/.
14.5.1 ML criterion
The expression of the likelihood is obtained from (14.80) assuming o = 0, that is

L_{\theta,\varepsilon,a}(z, e, \boldsymbol\alpha) = \exp\left\{ \frac{2}{N_0} \sum_{k=0}^{K-1} \mathrm{Re}[\alpha_k^*\, x_k(e)\, e^{-jz}] \right\} = \prod_{k=0}^{K-1} \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}[\alpha_k^*\, x_k(e)\, e^{-jz}] \right\}   (14.95)
With the exception of some special cases, the above functions cannot be computed in closed form. Consequently, we need to develop appropriate approximation techniques.
A first classification of synchronization algorithms is based on whether knowledge of
the data sequence is available or not; in this case we distinguish two classes:
1. decision-directed (DD) or data-aided (DA);
2. non-data aided (NDA).
If the data sequence is known, for example, by sending a training sequence a D α 0 during
the acquisition phase, we speak of data-aided algorithms. As the sequence a is known, in
the sum in the expression of the likelihood function only the term for α D α 0 remains.
For example, the joint estimate of .; "/ reduces to the maximization of the likelihood
prj;";a .ρ j z; e; α 0 /, and we get
On the other hand, whenever we use the detected sequence aO as if it were the true sequence
a, we speak of data-directed algorithms. If there is a high probability that aO D a, again in
the sum in the expression of the likelihood function only one term remains. Taking again
the joint estimate of .; "/ as an example, in (14.96) the sum reduces to
\sum_{\boldsymbol\alpha} p_{r|\theta,\varepsilon,a}(\rho \mid z, e, \boldsymbol\alpha)\, P[a = \boldsymbol\alpha] \simeq p_{r|\theta,\varepsilon,a}(\rho \mid z, e, \hat{a})   (14.100)
Non-data aided algorithms apply instead, in an exact or approximate fashion, the aver-
aging operation with respect to the data sequence.
A second classification of synchronization algorithms is made based on the synchroniza-
tion parameters that must be eliminated; then we have four cases:
prj" .ρ j e/ D prj;";a .ρ j ;
O e; aO / (14.104)
A further classification is based on the method for obtaining the timing and phase estimates:
1. feedforward (FF). The FF algorithms directly estimate the parameters (\theta, \varepsilon) without using signals that are modified by the estimates; this implies using signals before the interpolator filter for timing recovery and before the phase rotator (see Figure 14.19) for carrier phase recovery.
2. feedback (FB). The FB algorithms estimate the parameters (\theta, \varepsilon) using also signals that are modified by the estimates; in particular they yield an estimate of the errors e_\theta = \theta - \hat\theta and e_\varepsilon = \hat\varepsilon - \varepsilon, which are then used to control the interpolator filter and the phase rotator, respectively. In general, feedback structures are able to track slow changes of the parameters.
Next, we give a brief description of FB estimators, with emphasis on the fundamental
blocks and on input–output relations.
Feedback estimators
In Figure 14.22 the block diagrams of a FB phase (FB\theta) estimator and of a FB timing (FB\varepsilon) estimator are illustrated. These schemes can easily be extended to the case of a FB frequency offset estimator. The two schemes differ only in the first block. In the case of the FB\theta estimator, the first block is a phase rotator that, given the input signal s(kT) (x(kT + \hat\varepsilon T) in Figure 14.19), yields s(kT)\exp(-j\hat\theta_k), where \hat\theta_k is the estimate of \theta at instant kT; in the case of the FB\varepsilon estimator, it is an interpolator filter that, given the input signal s(kT) (r_D(nT_c) in Figure 14.19), returns s(kT + \hat\varepsilon_k T) (r_D(kT + \hat\varepsilon T) in Figure 14.19), where \hat\varepsilon_k is the estimate of \varepsilon at instant kT.
We analyze only the FB\theta estimator, as the FB\varepsilon estimator is similar. The error detector block is the fundamental block and has the function of generating a signal e(kT), called error signal, whose mean value can be written as
where g(\cdot) is an odd function. Then the error signal e(kT) admits the following decomposition, in which the residual term is a disturbance called loop noise; the loop filter F(z) is a lowpass filter that has two tasks: it regulates the speed of convergence of the estimator and mitigates the effects of the loop noise. The loop filter output, u(kT), is input to the numerically controlled oscillator (NCO), which updates the phase estimate according to the following recursive relation:
We note that in FB estimators the vector a represents the transmitted symbols from instant 0 up to the instant corresponding to the estimates \hat\theta_k, \hat\varepsilon_k.
Figure 14.23. Early-late FB\varepsilon estimator: the input x(t), sampled at kT + \hat\varepsilon_k T, feeds an upper branch advanced by \delta T and a lower branch delayed by \delta T, each followed by G_L[\cdot]; their difference forms e(kT), which drives the loop filter F(z) and the NCO producing \hat\varepsilon_k.
Early-late estimators
Early-late estimators constitute a subclass of FB estimators, where the error signal is
computed according to (14.109), and the derivative operation is approximated by a finite
difference [3], i.e., given a signal p(t), its derivative is computed as follows:
The block diagram of Figure 14.22b is modified into that of Figure 14.23. Observe that in the upper branch the signal x_k(\hat\varepsilon_k) is anticipated by \delta T, while in the lower branch it is delayed by \delta T: hence the name early-late estimator.
For an M-PSK signal, with M > 2, we approximate a_k as e^{j\varphi_k}, where \varphi_k is a uniform r.v. in (-\pi, \pi]; then (14.112) becomes

L_{\theta,\varepsilon}(z, e) = \prod_{k=0}^{K-1} \int_{-\pi}^{+\pi} \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}[e^{-jv_k}\, x_k(e)\, e^{-jz}] \right\} \frac{dv_k}{2\pi}   (14.113)
If we use the definition of the Bessel function (4.216), (14.113) becomes independent of the phase and we obtain

L_\varepsilon(e) = \prod_{k=0}^{K-1} I_0\!\left( \frac{|x_k(e)|}{N_0/2} \right)   (14.114)
On the other hand, if we take the expectation of (14.95) only with respect to the phase we obtain

L_{\varepsilon,a}(e, \boldsymbol\alpha) = \prod_{k=0}^{K-1} I_0\!\left( \frac{|x_k(e)\, \alpha_k^*|}{N_0/2} \right)   (14.115)
We observe that, for M-PSK, L_{\varepsilon,a}(e, \boldsymbol\alpha) = L_\varepsilon(e), as |\alpha_k| is a constant; this does not occur for M-QAM.
To obtain estimates from the two likelihood functions just obtained, if the signal-to-noise ratio \Gamma is sufficiently high, we utilize the fact that I_0(\cdot) can be approximated as

I_0(x) \simeq 1 + \frac{x^2}{4} \qquad |x| \ll 1   (14.116)

Taking the logarithm of the likelihood and eliminating non-relevant terms, we obtain the following NDA estimator and DA estimator.
On the other hand, if \Gamma \ll 1, (14.112) can be approximated using a power series expansion of the exponential function. Taking the logarithm of (14.112), using the hypothesis of i.i.d. symbols with E[a_n] = 0, and eliminating non-relevant terms, we obtain the following log-likelihood:

\ell_{\theta,\varepsilon}(z, e) = \mathrm{E}[|a_n|^2] \sum_{k=0}^{K-1} |x_k(e)|^2 + \mathrm{Re}\left[ \mathrm{E}[a_n^2] \sum_{k=0}^{K-1} (x_k^*(e))^2\, e^{j2z} \right]   (14.119)
The block diagram of the joint estimator is shown in Figure 14.24, where P values of the time shift \varepsilon, denoted \varepsilon^{(m)}, m = 1, \ldots, P, equally spaced in [-1/2, 1/2], are considered; usually the resolution obtained with P = 8 or 10 is sufficient. For each time shift \varepsilon^{(m)}, the log-likelihood (14.122) is computed, and the value of \varepsilon^{(m)} associated with the largest value of the log-likelihood is selected as the timing estimate. Furthermore, we observe that in the generic branch m, filtering by the matched filter g_M(iT_c + \varepsilon^{(m)}T) and sampling at the instants kT can be implemented by the cascade of an interpolator filter h_I(iT_c; \mu^{(m)}) (where \mu^{(m)} depends on \varepsilon^{(m)}) and a filter g_M(iT_c), followed by a decimator that provides samples at the instants m_k T_c, as illustrated in Figure 14.19 and described in Section 14.4.
Figure 14.24. NDA joint timing and phase (for E[a_n^2] \ne 0) estimator. [From Meyr, Moeneclaey, and Fechtel (1998). Reproduced by permission of Wiley.]
Now (14.123) is equal to the average of the cyclostationary process |x(kT + eT)|^2 in the interval [-L, L]; defining

c_i = \sum_{k=-L}^{L} c_i^{(k)}   (14.126)

it results [4] that only c_0 and c_1 have a non-zero mean, and (14.123) can be written as

\ell_\varepsilon(e) = c_0 + 2\,\mathrm{Re}[c_1\, e^{j2\pi e}] + \sum_{|i|\ge 2} 2\,\mathrm{Re}[c_i\, e^{j2\pi i e}]   (14.127)

where the last sum is a disturbance with zero mean for each value of e. Neglecting this disturbance, the maximization of (14.127) with respect to e yields

\hat\varepsilon = -\frac{1}{2\pi} \arg c_1   (14.128)
However, the coefficient c_1 is obtained by integration, which in general is hard to implement in the digital domain; on the other hand, if the bandwidth of |x(lT)|^2 satisfies the relation

B_{|x|^2} = \frac{1+\rho}{T} < \frac{1}{2T_c}   (14.129)

where \rho is the roll-off factor of the matched filter, then c_1 can be computed by DFT. Let F_0 = T/T_c; then we obtain

c_1 = \sum_{k=-L}^{L} \frac{1}{F_0} \sum_{l=0}^{F_0-1} |x([kF_0 + l]\, T_c)|^2\, e^{-j(2\pi/F_0)\, l}   (14.130)
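The DFT computation (14.130) together with the estimate (14.128) can be sketched end to end. The code below (illustrative raised-cosine pulse, binary symbols, noiseless) builds an oversampled PAM signal with a known time shift \varepsilon and recovers it from \arg c_1:

```python
import numpy as np

# Square-law NDA timing estimation per (14.128)/(14.130) with F0 = 4
# samples per symbol. Pulse shape, excess bandwidth and sizes are
# illustrative assumptions; excess bandwidth is needed for c1 != 0.
rng = np.random.default_rng(1)
F0, K, eps, beta = 4, 2000, 0.23, 0.5

def rc_pulse(t, beta):                    # raised-cosine pulse, T = 1
    num = np.cos(np.pi * beta * t)
    den = 1 - (2 * beta * t) ** 2
    den = np.where(np.abs(den) < 1e-8, 1e-8, den)
    return np.sinc(t) * num / den

a = rng.choice([-1.0, 1.0], size=K)
t = np.arange(F0 * K) / F0                # receiver grid, Tc = T/F0
x = np.zeros_like(t)
for k in range(K):
    x += a[k] * rc_pulse(t - k - eps, beta)

n = np.arange(len(x))
c1 = np.sum(np.abs(x) ** 2 * np.exp(-1j * 2 * np.pi * n / F0))
eps_hat = -np.angle(c1) / (2 * np.pi)
assert abs(eps_hat - eps) < 0.03          # self-noise only, no channel noise
```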
Figure 14.25. NDA timing estimator via spectral estimation for the case F0 D 4. [From Meyr,
Moeneclaey, and Fechtel (1998). Reproduced by permission of Wiley.]
Figure 14.26. Phase independent DA (DD) timing estimator. [From Meyr, Moeneclaey, and
Fechtel (1998). Reproduced by permission of Wiley.]
The block diagram of the estimator is shown in Figure 14.26; note that this algorithm can be used only if phase recovery is carried out before timing recovery.
For a joint phase and timing estimator, from (14.95) we get

L_{\theta,\varepsilon}(z, e) = \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}\left[ \sum_{k=0}^{K-1} \hat a_k^*\, x_k(e)\, e^{-jz} \right] \right\}   (14.134)
Defining

r(e) = \sum_{k=0}^{K-1} \hat a_k^*\, x_k(e)   (14.135)

we obtain

(\hat\theta, \hat\varepsilon) = \arg\max_{z,e} \mathrm{Re}[r(e)\, e^{-jz}] = \arg\max_{z,e} |r(e)|\, \mathrm{Re}[e^{-j(z - \arg r(e))}]   (14.136)
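A sketch of the DA joint estimator (14.135)-(14.136) over a grid of P trial time shifts follows. The model used for the matched-filter outputs x_k(e) (a sinc-shaped correlation peak with QPSK symbols and small noise) is a stand-in for the branches of Figure 14.27, not the book's exact signal model:

```python
import numpy as np

# DA joint phase/timing estimation per (14.135)-(14.136): compute
# r(eps_m) = sum_k conj(a_k) x_k(eps_m) over P trial shifts, take the
# shift maximizing |r|, and read the phase as arg r. The x_k model
# below is an illustrative assumption.
rng = np.random.default_rng(2)
K, theta, eps = 500, 0.7, 0.125
a = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, K)))  # QPSK

def x_k(e):  # matched-filter samples for trial shift e (toy model)
    g = np.sinc(e - eps)                   # Nyquist-shaped correlation peak
    noise = rng.normal(0, 0.05, K) + 1j * rng.normal(0, 0.05, K)
    return a * g * np.exp(1j * theta) + noise

P = 8
trials = np.linspace(-0.5, 0.5, P, endpoint=False) + 1 / (2 * P)
r = np.array([np.sum(np.conj(a) * x_k(e)) for e in trials])
m = int(np.argmax(np.abs(r)))
eps_hat, theta_hat = trials[m], np.angle(r[m])
assert abs(eps_hat - eps) <= 1 / P         # grid resolution of the search
assert abs(theta_hat - theta) < 0.05
```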
Figure 14.27 illustrates the implementation of this second estimator; note that this scheme
is a particular case of (7.269).
For both estimators, estimation of the synchronization parameters is carried out every K
samples, according to the assumption of slow parameter variations made at the beginning
of the section.
Figure 14.27. DA (DD) joint phase and timing estimator. [From Meyr, Moeneclaey, and Fechtel
(1998). Reproduced by permission of Wiley.]
Observation 14.2
In the case the channel is not known, to implement the matched filter g_M we need to estimate the overall impulse response q_C; the estimation of q_C, for example by one of the methods presented in Appendix 3.A, and the estimation of the timing can then be performed jointly.
Let F_0 = T/T_c and Q_0 = T/T_Q be integers, with Q_0 \ge F_0. From the signal \{r_{AA}(qT_Q)\}, obtained by oversampling r_{AA}(t) or by interpolation of \{r_{AA}(nT_c)\}, and the knowledge of the training sequence \{a_k\}, k = 0, \ldots, L_{TS} - 1, the estimate of q_C with sampling period T_Q, or equivalently the estimate of its Q_0/F_0 polyphase components with sampling period T_c (see Observation 8.5), is obtained. Limiting the estimate to the most significant consecutive samples around the peak, the determination of the timing phase with precision T_Q coincides with the selection of the polyphase component with the largest energy among the Q_0/F_0 polyphase components. This determines the optimum filter g_M with sampling period T_c. Typically, for radio systems F_0 = 2, and Q_0 = 4 or 8.
With reference to the scheme of Figure 14.22, if we suppose that the sum in (14.139) is approximated by the filtering operation of the loop filter F(z), the error signal e(kT) results in

e(kT) = \mathrm{Re}\left[ \hat a_k^*\, \left. \frac{\partial}{\partial e} x(kT + eT) \right|_{e=\hat\varepsilon_k} e^{-j\hat\theta} \right]   (14.140)
The partial derivative of x(kT + eT) with respect to e can be carried out in the digital domain by a differentiator filter with an ideal frequency response given by

H_d(f) = j2\pi f, \qquad |f| \le \frac{1}{2T_c}   (14.141)
In practice, if T/T_c \ge 2 it is simpler to implement the differentiator by a finite difference filter having an antisymmetric impulse response given by

h_d(iT_c) = \frac{1}{2T_c} (\delta_{i+1} - \delta_{i-1})   (14.142)
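The finite-difference filter (14.142) can be checked on a slow tone, for which the central difference is an accurate derivative:

```python
import numpy as np

# Central-difference differentiator per (14.142):
# y[n] = (x[n+1] - x[n-1]) / (2*Tc), accurate for signals well below
# the Nyquist frequency 1/(2*Tc). Values are illustrative.
Tc = 0.1
n = np.arange(0, 200)
f = 0.2                                   # tone at 0.2 Hz << 1/(2*Tc) = 5 Hz
x = np.sin(2 * np.pi * f * n * Tc)
y = (x[2:] - x[:-2]) / (2 * Tc)           # finite-difference derivative
true = 2 * np.pi * f * np.cos(2 * np.pi * f * n[1:-1] * Tc)
# Error amplitude is omega - sin(omega*Tc)/Tc ~ (Tc^2/6)*omega^3, tiny here
assert np.max(np.abs(y - true)) < 1e-2
```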
Figure 14.28 illustrates the block diagram of the estimator, where the compact notation \dot x(t) is used in place of dx(t)/dt; moreover, based on the analysis of Section 14.5.2, if u(kT) is the loop filter output, the estimate of \varepsilon is updated as \hat\varepsilon_{k+1} = \hat\varepsilon_k + \mu_\varepsilon\, u(kT), where \mu_\varepsilon is a suitable constant. Applying (14.88) to the value of \hat\varepsilon_{k+1}, we obtain the values of \mu_{k+1} and m_{k+1}.
Figure 14.28. DD & D -FB timing estimator. [From Meyr, Moeneclaey, and Fechtel (1998). Reproduced by permission of Wiley.]
Observe that, under the assumptions of Section 14.4, q_R(t) is a Nyquist pulse; moreover, we assume that, in the absence of channel distortion, q_R(t) is an even function.
Note that the signal (14.144) is an odd function of the estimation error e_\varepsilon for e_\varepsilon \in (-1, 1), whereas the signal (14.145) is an odd function of e_\varepsilon only around e_\varepsilon = 0.
Under lock conditions, i.e. for e_\varepsilon \to 0, the two versions of the algorithm exhibit a similar behavior. However, the type A algorithm gives better results than the type B algorithm in transient conditions, because the mean value of the error signal for the type B algorithm is not symmetric. Moreover, the type A algorithm is also effective in the presence of signal distortion.
The error signal for the type A algorithm is chosen as in (14.146), premultiplied by a suitable constant whose value is discussed below. Assuming that \hat a_{k-1} = a_{k-1}, \hat a_k = a_k, and \hat\theta = \theta, from (14.71) and (14.79) for o = \Omega = 0, (14.146) can be written as

e(kT) = \mathrm{Re}\Bigg\{ a_{k-1}^* \sum_{i=-\infty}^{+\infty} a_i\, q_R(kT + e_\varepsilon T - iT) - a_k^* \sum_{i=-\infty}^{+\infty} a_i\, q_R((k-1)T + e_\varepsilon T - iT) + a_{k-1}^*\, \tilde w_k - a_k^*\, \tilde w_{k-1} \Bigg\}   (14.147)

where \tilde w_k is the decimated noise signal at the matched filter output. We define
For a constant equal to

\frac{1}{2\,(\mathrm{E}[|a_k|^2] - |m_a|^2)}   (14.151)

we obtain (14.144). Similarly, in the case of the type B algorithm the error signal assumes the expression
the expression
1
[aO k1 ma ]Ł x k .O" / e j
O
e.kT / D (14.152)
.E[jak j2 ] jma j2 /
Figure 14.29 illustrates the block diagram of the direct section of the type A estimator.
The constant is included in the loop filter and is not explicitly shown.
If we assume that the sum is carried out by the loop filter, the error signal is given by

e(kT) = \mathrm{Re}[x(kT + \hat\varepsilon_k T)\, \dot x^*(kT + \hat\varepsilon_k T)]   (14.154)

and is maximized by

e^{j\hat\theta} = e^{j \arg \sum_{k=0}^{K-1} \hat a_k^*\, x_k(\hat\varepsilon)}   (14.156)
Figure 14.30 illustrates the implementation of the estimator (14.156).
Figure 14.30. DA (DD) phase estimator computing e^{j\hat\theta} from \sum_{k=0}^{K-1} \hat a_k^*\, x_k(\hat\varepsilon).
Figure 14.31. NDA M-th power phase estimator: x_k(\hat\varepsilon) is raised to the M-th power, and \hat\theta_M = (1/M) \arg \sum_{k=0}^{K-1} (x_k(\hat\varepsilon))^M.
We note that raising x_k(\hat\varepsilon) to the M-th power causes a phase ambiguity equal to a multiple of 2\pi/M; in fact, if \hat\theta is a solution to (14.160), then \hat\theta + 2\pi l/M, for l = 0, \ldots, M-1, are also solutions. This ambiguity can be removed, for example, by differential encoding (see Section 6.5.2). The estimator block diagram is illustrated in Figure 14.31.
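A sketch of the M-th power estimator and its ambiguity for M = 4 (symbols on \{1, j, -1, -j\}, illustrative noise level):

```python
import numpy as np

# M-th power NDA phase estimation for M = 4: raising the samples to the
# 4th power removes the data modulation (a_k^4 = 1 for symbols on
# {1, j, -1, -j}), and theta_hat = (1/M) arg sum_k x_k^M recovers the
# carrier phase up to a multiple of 2*pi/M. Values are illustrative.
rng = np.random.default_rng(3)
M, K, theta = 4, 2000, 0.15               # |theta| < pi/M here
a = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, K))
x = a * np.exp(1j * theta) + 0.05 * (rng.normal(size=K) + 1j * rng.normal(size=K))

s = np.sum(x ** M)
theta_hat = np.angle(s) / M
# Fold the residual into (-pi/M, pi/M]: the estimate is ambiguous mod 2*pi/M
err = (theta_hat - theta + np.pi / M) % (2 * np.pi / M) - np.pi / M
assert abs(err) < 0.02
```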
Figure 14.32. (a) PHLL; (b) DD & D"-FB phasor estimator. [From Meyr, Moeneclaey, and
Fechtel (1998). Reproduced by permission of Wiley.]
1068 Chapter 14. Synchronization
Hence, we can use a digital version of the PLL to implement the estimator. However, the error signal (14.161) introduces a phase ambiguity; in fact, it assumes the same value if we substitute $(\hat\theta_k - \pi)$ for $\hat\theta_k$. An alternative to the digital PLL is given by the phasor-locked loop (PHLL), which provides an estimate of the phasor $e^{j\theta}$, rather than an estimate of $\theta$, thus eliminating the ambiguity.
The block diagram of the PHLL is illustrated in Figure 14.32a; it is a feedback structure with the phasor $p_k = e^{j\theta_k}$ as input and the estimate $\hat p_k = e^{j\hat\theta_k}$ as output. The error signal $e_k$ is obtained by subtracting the estimate $\hat p_k$ from $p_k$; $e_k$ is then input to the loop filter $F(z)$, which yields the signal $u_k$ used to update the phasor estimate according to the recursive relation
Figure 14.32b illustrates the block diagram of a DD&D$\varepsilon$ phasor estimator that implements the PHLL. Observe that the input phasor $p_k$ is obtained by multiplying $x_k(\hat\varepsilon)$ by $\hat a_k^*$ to remove the dependence on the data; the dashed block normalizes the estimate $\hat p_k$ in the QAM case.
Figure 14.33. Receiver of Figure 14.19 with interpolator and matched filter interchanged.
[From Meyr, Moeneclaey, and Fechtel (1998). Reproduced by permission of Wiley.]
14.6. Algorithms for carrier frequency recovery 1069
Therefore in the schemes of Figure 14.19 and Figure 14.33 the frequency translator may
be moved after the decimator, together with the phase rotator.
Non-data aided
Suppose the receiver operates at a low signal-to-noise ratio $\Gamma$; similarly to (14.123), the log-likelihood for the joint estimate of $(\varepsilon, \Omega)$ over the observation interval $[-LT, LT]$ is given by
$$ \ell_{\varepsilon,\Omega}(e, o) = \sum_{k=-L}^{L} |x(kT + eT; o)|^2 \qquad (14.167) $$
By expanding $\ell_{\varepsilon,\Omega}(e, o)$ in a Fourier series and using the notation introduced in the previous section, we obtain
$$ \ell_{\varepsilon,\Omega}(e, o) = c_0 + 2\,\mathrm{Re}\big[c_1\, e^{j2\pi e}\big] + \underbrace{2\,\mathrm{Re}\Big[\sum_{|i|\ge 2} c_i\, e^{j2\pi i e}\Big]}_{\text{disturbance}} \qquad (14.168) $$
Now the mean value of $c_0$, $\mathrm{E}[c_0]$, depends on $o$ but is independent of $e$, and furthermore it is maximized for $o = \Omega$; hence
$$ \hat\Omega = \arg\max_{o}\; c_0 \qquad (14.169) $$
As in the derivation of (14.131), starting from (14.169) and assuming that the ratio $F_0 = T/T_c$ is an integer, we obtain the following joint estimate of $(\Omega, \varepsilon)$ [4]:
$$ \hat\Omega = \arg\max_{o} \sum_{n=-LF_0}^{LF_0 - 1} |x(nT_c; o)|^2 $$
$$ \hat\varepsilon = -\frac{1}{2\pi}\, \arg \sum_{n=-LF_0}^{LF_0 - 1} |x(nT_c; \hat\Omega)|^2\, e^{-j2\pi n/F_0} \qquad (14.170) $$
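The timing part of (14.170) reads the timing phase off the spectral line of $|x(nT_c)|^2$ at the symbol rate. The following sketch illustrates the idea on a synthetic squared envelope (the function name, the toy envelope, and the oversampling factor are our own assumptions, not the book's data):

```python
import cmath
import math

def nda_timing_estimate(x_abs2, F0):
    """Square-law NDA timing estimate in the spirit of (14.170): the timing
    phase is read from the spectral line of |x(nTc)|^2 at frequency 1/T.
    Returns epsilon_hat as a fraction of T."""
    r = sum(v * cmath.exp(-2j * math.pi * n / F0)
            for n, v in enumerate(x_abs2))
    return -cmath.phase(r) / (2 * math.pi)

# synthetic squared envelope: period F0 = T/Tc samples, timing offset 0.1 T
F0, eps = 4, 0.1
x_abs2 = [1.0 + math.cos(2 * math.pi * (n - eps * F0) / F0)
          for n in range(F0 * 50)]

eps_hat = nda_timing_estimate(x_abs2, F0)
```

With an envelope that is exactly periodic in $F_0$ samples, the estimate recovers the offset exactly; with noise, the sum over many symbol periods averages the disturbance terms of (14.168).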
[Figure 14.34: bank of $M$ branches; in the $m$-th branch $r_{AA}(t)$ is translated by $e^{-j\Omega^{(m)} nT_c}$, filtered by $g_M(iT_c)$, squared in magnitude, and summed over $n$; the maximizing branch yields $\hat\Omega$, and $x(nT_c, \hat\Omega)$ is passed to the timing and/or phase estimator.]
Figure 14.34. NDA frequency offset estimator. [From Meyr, Moeneclaey, and Fechtel (1998).
Reproduced by permission of Wiley.]
The implementation of the estimator is illustrated in Figure 14.34; observe that the signal $x(nT_c; o)$ can be rewritten as
$$ \begin{aligned} x(nT_c; o) &= \sum_i r_{AA}(iT_c)\, e^{-joiT_c}\, g_M(nT_c - iT_c) \\ &= e^{-jonT_c} \sum_i r_{AA}(iT_c)\, e^{-jo(i-n)T_c}\, g_M(nT_c - iT_c) \\ &= e^{-jonT_c} \sum_i r_{AA}(iT_c)\, g_M^{(pb)}(nT_c - iT_c; o) \end{aligned} \qquad (14.171) $$
where
$$ g_M^{(pb)}(iT_c; o) = g_M(iT_c)\, e^{joiT_c} \qquad (14.172) $$
We note that $|x(nT_c; o)| = |x_o(nT_c)|$; hence in the $m$-th branch of Figure 14.34 the cascade of the frequency translator and the filter can be replaced by a single filter with impulse response $g_M^{(pb)}(iT_c; \Omega^{(m)})$.
Observe that, as $c_0(\cdot)$ is independent of $e$, $e(nT_c)$ is also independent of $e$. From the first line of (14.171), the partial derivative of $x(nT_c; o)$ with respect to $o$ is given by
$$ \frac{\partial}{\partial o}\, x(nT_c; o) = \sum_{i=-\infty}^{+\infty} (-jiT_c)\, r_{AA}(iT_c)\, e^{-joiT_c}\, g_M(nT_c - iT_c) \qquad (14.175) $$
We define the frequency matched filter
$$ g_{FM}(iT_c) = (jiT_c) \cdot g_M(iT_c) \qquad (14.176) $$
Observe now that, if the signal $r_D(nT_c) = r_{AA}(nT_c)\, e^{-jonT_c}$ is input to the filter $g_{FM}(iT_c)$, then from (14.175) the output is given by
$$ x_{FM}(nT_c) = g_{FM} * r_D\,(nT_c) = \frac{\partial}{\partial o}\, x(nT_c; o) + jnT_c\, x(nT_c; o) \qquad (14.177) $$
from which we obtain
$$ \frac{\partial}{\partial o}\, x(nT_c; o) = x_{FM}(nT_c) - jnT_c\, x(nT_c; o) \qquad (14.178) $$
Therefore the expression of the error signal (14.174) becomes
$$ e(nT_c) = 2\,\mathrm{Re}\big[x(nT_c; \hat\Omega_n)\, x_{FM}^*(nT_c; \hat\Omega_n)\big] \qquad (14.179) $$
The block diagram of the resulting estimator is shown in Figure 14.35. The loop filter output $u(kT_c)$ is sent to the NCO, which yields the frequency offset estimate according to the recursive equation
$$ \hat\Omega_{n+1} T_c = \hat\Omega_n T_c + \mu\, u(nT_c) \qquad (14.180) $$
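The frequency-matched-filter relation (14.178) can be checked numerically. Below is a minimal sketch with a toy filter and toy received samples (all values and names are our own illustrative assumptions): the derivative of $x(nT_c, o)$ with respect to $o$ computed by finite differences matches $x_{FM}(nT_c) - jnT_c\, x(nT_c, o)$.

```python
import cmath

Tc = 0.5
g = [0.1, 0.4, 1.0, 0.4, 0.1]                    # toy g_M(iTc), i = -2..2
r = [0.3, -1.0, 0.8, 0.5, -0.2, 0.9, 0.1, -0.7]  # toy r_AA(iTc) samples

def gM(i):
    """g_M(i*Tc), zero outside the support i = -2..2."""
    return g[i + 2] if -2 <= i <= 2 else 0.0

def x(n, o):
    """x(nTc, o) = sum_i r_AA(iTc) e^{-j o i Tc} g_M((n-i)Tc), cf. (14.171)."""
    return sum(r[i] * cmath.exp(-1j * o * i * Tc) * gM(n - i)
               for i in range(len(r)))

def x_FM(n, o):
    """Frequency matched filter g_FM(iTc) = (j i Tc) g_M(iTc) driven by
    r_D(nTc) = r_AA(nTc) e^{-j o n Tc}, cf. (14.176)-(14.177)."""
    return sum(r[i] * cmath.exp(-1j * o * i * Tc)
               * (1j * (n - i) * Tc) * gM(n - i)
               for i in range(len(r)))

# check (14.178): d/do x(nTc, o) = x_FM(nTc) - j n Tc x(nTc, o)
n, o, h = 4, 0.7, 1e-6
deriv_fd = (x(n, o + h) - x(n, o - h)) / (2 * h)   # finite difference
deriv_eq = x_FM(n, o) - 1j * n * Tc * x(n, o)
```

This is the identity that lets the loop of Figure 14.35 obtain the gradient of $|x|^2$ with one extra filtering branch instead of a numerical derivative.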
$$ \ell_{\Omega}(o) = \sum_{k=0}^{K-1} |x(kT + \hat\varepsilon T; o)|^2 \qquad (14.181) $$
Proceeding as in the previous section, we get the block diagram illustrated in Figure 14.36.
[Figure 14.35: $r_{AA}(nT_c)$ is rotated by $e^{-j\hat\Omega_n nT_c}$ and feeds both the matched filter $g_M(iT_c)$, yielding $x(nT_c, \hat\Omega_n)$, and the frequency matched filter $g_{FM}(iT_c)$, yielding $x_{FM}(nT_c, \hat\Omega_n)$; the error $e(nT_c) = 2\,\mathrm{Re}\{x\, x_{FM}^*\}$ drives the loop filter $F(z)$ and the NCO.]
Figure 14.35. NDA-ND"-FB frequency offset estimator. [From Meyr, Moeneclaey, and Fechtel
(1998). Reproduced by permission of Wiley.]
[Figure 14.36: as in Figure 14.35, with an interpolator $h_I(iT_c, \mu_k)$ and a decimator (from $nT_c$ to $m_k T_c$) inserted after both the matched filter and the frequency matched filter; the loop filter $F(z)$ and NCO operate at the symbol rate on $e(kT)$.]
Note that the joint estimate is computed by finding the maximum of a function of one variable; in fact, defining
$$ r(o) = \sum_{k=-(K-1)/2}^{(K-1)/2} \alpha_{0,k}^*\, x_k(\hat\varepsilon)\, e^{-jokT} \qquad (14.184) $$
the maximum in (14.183) is obtained by first finding the value of $o$ that maximizes $|r(o)|$,
$$ \hat\Omega = \arg\max_{o}\, |r(o)| \qquad (14.186) $$
and then finding the value of $z$ for which the term within brackets in (14.185) becomes real valued,
$$ \hat\theta = \arg\{r(\hat\Omega)\} \qquad (14.187) $$
We now want to solve (14.186) in closed form; a necessary condition for a maximum is that the derivative of $|r(o)|^2 = r(o)\, r^*(o)$ with respect to $o$ equals zero for $o = \hat\Omega$. Defining
$$ b_k = k(k+1) - \frac{K^2 - 1}{4} \qquad (14.188) $$
we obtain
$$ \hat\Omega = \frac{1}{T}\, \arg\left\{ \sum_{k=-(K-1)/2}^{(K-1)/2} b_k\, \frac{\alpha_{0,k+1}^*}{\alpha_{0,k}^*}\, \big[x_{k+1}(\hat\varepsilon)\, x_k^*(\hat\varepsilon)\big] \right\} \qquad (14.189) $$
Ł
O kC1 D
O k C ¼;1 e .kT / C ¼;2 e .kT / (14.192)
where e and e are estimates of the phase error and of the frequency error, respectively,
and ¼ , ¼;1 and ¼;2 are suitable constants. Typically (see Example 15.6.4 on page 1110)
p
¼;1 ' ¼;2 ' ¼ .
Observe that (14.191) and (14.192) form a digital version of the second-order analog
PLL illustrated in Figure 14.7.
In terms of the symbols $\{a_k\}$, we obtain the following alternative representation that will be used next:
$$ s(t) = \sum_{k=-\infty}^{+\infty} a_k \sum_{m=kN_{SF}}^{kN_{SF}+N_{SF}-1} c_m\, h_{Tx}(t - mT_{chip}) \qquad (14.195) $$
Optimum receiver
With the same assumptions as in Section 14.1 and for the transmission of $K$ symbols, the received signal $r_C(t)$ is expressed as
$$ r_C(t) = \sum_{k=0}^{K-1} a_k \sum_{m=kN_{SF}}^{kN_{SF}+N_{SF}-1} c_m\, q_C(t - mT_{chip} - \varepsilon T_{chip})\, e^{j(\Omega t + \theta)} + w_C(t) \qquad (14.196) $$
The likelihood $L_{ss} = L_{\theta,\varepsilon,\Omega,a}(z, e, o, \boldsymbol{\alpha})$ can be computed as in (14.77); after a few steps we obtain
$$ L_{ss} = \exp\left\{ \frac{2}{N_0}\, \mathrm{Re}\left[ \sum_{k=0}^{K-1} \alpha_k^*\, e^{-jz} \sum_{m=kN_{SF}}^{kN_{SF}+N_{SF}-1} c_m^* \int_{T_K} \rho(t)\, e^{-jot}\, g_M(mT_{chip} + eT_{chip} - t)\, dt \right] \right\} \qquad (14.197) $$
$$ y(kT; e, o) = \sum_{m=kN_{SF}}^{kN_{SF}+N_{SF}-1} c_m^*\, x(mT_{chip} + eT_{chip}; o) \qquad (14.198) $$
$$ y(lT_{chip}; e, o) = \sum_{m=l-N_{SF}+1}^{l} c_m^*\, x(mT_{chip} + eT_{chip}; o) \qquad (14.200) $$
[Block diagram: despreading and phase correction — $x(mT_{chip} + \hat\varepsilon T_{chip}, \hat\Omega)$ is multiplied by $c_m^*$ and accumulated over $N_{SF}$ chips, $y(kT, \hat\varepsilon, \hat\Omega) = \sum_{i=m-N_{SF}+1}^{m} c_i^*\, x(iT_{chip} + \hat\varepsilon T_{chip}, \hat\Omega)$; the result is rotated by $e^{-j\hat\theta(kT)}$ from the phase estimator and sent to the detector yielding $\hat a_k$.]
8 Note that now $T_c \simeq T_{chip}/2$, and the sampling instants at the decimator are such that $mT_{chip} + \hat\varepsilon T_{chip} = m_m T_c + \mu_m T_c$. The estimate $\hat\varepsilon$ is updated at every symbol period $T = T_{chip} \cdot N_{SF}$.
14.8. Synchronization in spread spectrum systems 1077
The block diagram of the estimator is shown in Figure 14.39; note that the lag and the lead equal to $\delta T_{chip}$ are implemented by interpolator filters operating at the sampling period $T_c$ (see (14.91)) with parameter $\mu$ equal, respectively, to $-\tilde\delta$ and $+\tilde\delta$, where $\tilde\delta = \delta\,(T_{chip}/T_c)$. The estimator is called a non-coherent digital DLL [8, 9], as the dependence of the error signal on the pair $(\theta, a)$ is eliminated without computing the estimates.
$$ \frac{\partial \ell_{\varepsilon}(e)}{\partial e} \simeq \frac{1}{\delta} \sum_{k=0}^{K-1} \mathrm{Re}\big[y_k(e)\,\big(y_k(e+\delta) - y_k(e-\delta)\big)^*\big] \qquad (14.207) $$
Assuming the loop filter performs the multiplication by $1/\delta$ and the sum, the error signal is given by
The block diagram of the estimator is shown in Figure 14.40; this scheme is called the modified code tracking loop (MCTL) [10], and also in this case the estimator is non-coherent.
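A single term of the early-late derivative approximation (14.207) can be sketched as follows (the triangular correlation function and all names are our own toy assumptions; the $1/\delta$ factor and the sum over $k$ are left to the loop filter, as in the text):

```python
def early_late_error(y_on, y_early, y_late):
    """One term of the early-late derivative approximation (14.207):
    e_k = Re[ y_k(e) * conj(y_k(e + delta) - y_k(e - delta)) ]."""
    return (y_on * complex(y_early - y_late).conjugate()).real

# toy real correlation peak y(e) = max(0, 1 - |e|), true timing at e = 0
tri = lambda t: max(0.0, 1.0 - abs(t))
delta = 0.25

err_left = early_late_error(tri(-0.2), tri(-0.2 + delta), tri(-0.2 - delta))
err_zero = early_late_error(tri(0.0), tri(delta), tri(-delta))
```

At the correct timing the early and late samples balance and the error is zero; off the peak, the imbalance produces a correction term of the appropriate sign.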
$$ \ell_{\varepsilon}(e) = \sum_{k=0}^{K-1} \mathrm{Re}\big[\hat a_k^*\, y_k(e)\, e^{-j\hat\theta}\big] \qquad (14.209) $$
Approximating the derivative as in (14.110) and including both the multiplicative constant
and the summation in the loop filter, the error signal is given by
Figure 14.41 illustrates the block diagram of the estimator, which is called a coherent DLL [9, 10, 11], as the error signal is obtained using the estimates $\hat\theta$ and $\hat a$.
In the three schemes of Figures 14.39, 14.40, and 14.41, the direct section of the DLL gives estimates of $m_m$ and $\mu_m$ at every symbol period $T$, whereas the feedback loop may operate at the chip period $T_{chip}$. Observe that by removing the decimation blocks the DLL is able to provide timing estimates at every chip period.
Bibliography
Chapter 15
Self-training equalization
1 In this chapter, to simplify notation, often the argument of the z-transform is not explicitly indicated.
where $H_0$ denotes the gain, $P_3(z)$ and $P_1(z)$ are monic polynomials with zeros inside the unit circle, and $P_2(z)$ is a monic polynomial with zeros outside the unit circle. We introduce the inverse functions with respect to the polynomials $P_1(z)$ and $P_2(z)$, given by $P_1^{-1}(z) = 1/P_1(z)$ and $P_2^{-1}(z) = 1/P_2(z)$, respectively. By expanding $P_1^{-1}(z)$ and $P_2^{-1}(z)$ in Laurent series we obtain, apart from lag factors,
$$ P_1^{-1}(z) = \sum_{n=0}^{+\infty} c_{1,n}\, z^{-n} \qquad\qquad P_2^{-1}(z) = \sum_{n=-\infty}^{0} c_{2,n}\, z^{-n} \qquad (15.2) $$
both converging in a ring that includes the unit circle. Therefore we have
$$ H^{-1}(z) = \frac{1}{H_0}\, P_3(z) \left( \sum_{n=0}^{+\infty} c_{1,n}\, z^{-n} \right) \left( \sum_{n=-\infty}^{0} c_{2,n}\, z^{-n} \right) \qquad (15.3) $$
In general, from (15.3) we note that, as the system is non-minimum phase, the inverse system cannot be described by a finite number of parameters; in other words, the reconstruction of a transmitted symbol at the equalizer output at a certain instant requires the knowledge of the entire received sequence.
In practice, to obtain an implementable system, the series in (15.3) are truncated to $N$ terms, and an approximation of $H^{-1}$ with a lag equal to $(N-1)$ modulation intervals is given by
$$ H^{-1}(z) \simeq \frac{1}{H_0}\, z^{-(N-1)}\, P_3(z) \left( \sum_{n=0}^{N-1} c_{1,n}\, z^{-n} \right) \left( \sum_{n=-(N-1)}^{0} c_{2,n}\, z^{-n} \right) \qquad (15.4) $$
Therefore the inverse system can only be defined apart from a lag factor.
The problem of self-training equalization is formulated as follows: from the knowledge of the probability distribution of the channel input symbols $\{a_k\}$ and from the observation of the channel output sequence $\{x_k\}$, we want to find an equalizer $C$ such that the overall system impulse response is ISI free.
Observe that, if the channel $H$ is minimum phase,2 both $H$ and $H^{-1}$ are causal and stable; in this case the problem of channel identification, and hence the determination of the sequence $\{a_k\}$, can be solved by whitening (see page 143) the observation $\{x_k\}$ using known procedures based on the second-order statistical description of signals. If, as happens in the general case, the channel $H$ is non-minimum phase, from the second-order statistical description (1.263) it is possible to identify only the amplitude characteristic of the channel transfer function, but not its phase characteristic.
In particular, if the probability density function of the input symbols is Gaussian, the output $\{x_k\}$ is also Gaussian and the process is completely described by a second-order analysis; therefore the above observations hold and, in general, the problem of self-training equalization cannot be solved for Gaussian processes. Note that we have referred to a system model in which the sampling frequency is equal to the symbol rate; a solution to the problem using a second-order description with reference to an oversampled model is obtained in [8].
Furthermore, we observe that, as the probability density function of the input symbols is symmetric, the sequence $\{-a_k\}$ has the same statistical description as the sequence $\{a_k\}$; consequently, it is not possible to distinguish the desired equalizer $H^{-1}$ from the equalizer $-H^{-1}$. Hence, the inverse system can only be determined apart from the sign and a lag factor. Therefore the solution to the problem of self-training equalization is given by $C = \pm H^{-1}$, which yields the overall system $\Psi = \pm I$, where $I$ denotes the identity, with the exception of a possible lag.
In this chapter we will refer to the following theorem, proved in [9].
Theorem 15.1
Assume that the probability density function of the input symbols is non-Gaussian; then $\Psi = \pm I$ if the output sample
$$ y_k = \sum_{n=-\infty}^{+\infty} c_n\, x_{k-n} \qquad (15.5) $$
has a probability density function $p_{y_k}(b)$ equal to the probability density function of the symbols $\{a_k\}$.
Therefore, to use this theorem to obtain the solution, it is necessary to determine an algorithm for the adaptation of the coefficients of the equalizer $C$ such that the probability distribution of $y_k$ converges to the distribution of $a_k$.
We introduce the cost function
$$ J = \mathrm{E}[\Phi(y_k)] \qquad (15.6) $$
where $y_k$ is given by (15.5) and $\Phi$ is an even, real-valued function that must be chosen so that the optimum solution, determined by
$$ C_{opt}(z) = \arg\min_{C(z)}\, J \qquad (15.7) $$
Let
$$ \mathbf{x}_k = [x_k, x_{k-1}, \ldots, x_{k-(N-1)}]^T \qquad (15.9) $$
then (15.5) becomes
$$ y_k = \sum_{n=0}^{N-1} c_{n,k}\, x_{k-n} = \mathbf{c}_k^T\, \mathbf{x}_k \qquad (15.10) $$
The function $V$ characterizes the peak amplitude of the system output signal $\{y_k\}$, and the constraint can be interpreted as a requirement that the solution $\{\psi_i\}$ belongs to the sphere with center at the origin and radius $r = 1$ in the parameter space $\{\psi_i\}$.
Letting $\boldsymbol{\psi} = [\ldots, \psi_0, \psi_1, \ldots]^T$, we have $\nabla_{\boldsymbol{\psi}} V = [\ldots, \mathrm{sgn}(\psi_0), \mathrm{sgn}(\psi_1), \ldots]^T$; then, if $\psi_{max}$ is the maximum value assumed by $\psi_i$, the cost function (15.14) presents stationary points for $\psi_i = \pm\psi_{max}$. Now, taking into account the constraint (15.15), it is easily seen that the minimum of (15.14) is reached only when one element of the sequence $\{\psi_i\}$ is different from zero (with a value equal to $\psi_{max}$) and the others are all zero. In other words, the only points of minimum are given by $\Psi = \pm I$, and the other stationary points correspond to saddle points. Figure 15.2 illustrates the cost function $V$ along the unit circle for a system with two parameters.
15.1. Problem definition and fundamentals 1087
Figure 15.2. Illustration of the cost function $V$ for the system $\Psi \leftrightarrow \{\psi_0, \psi_1\}$ and of the gradient $\nabla V$ projected onto the straight line tangent to the unit circle.
$$ \psi_{i,k+1} = \frac{\psi'_{i,k+1}}{\sqrt{\displaystyle\sum_{\ell=-\infty}^{+\infty} \big(\psi'_{\ell,k+1}\big)^2}} \qquad (15.18) $$
Note that, if the term $\psi_{i,k} \sum_{\ell} |\psi_{\ell,k}|$ is omitted in (15.17), the direction of steepest descent is still followed to a good approximation, provided that the adaptation gain $\mu$ is sufficiently small.
From (15.17), for each parameter $\psi_i$ the updating consists of a correction toward zero of a fixed value $\Delta$ and a correction in the opposite direction of a value proportional to the parameter amplitude. Assuming that the initial point is not a saddle point, by repeated iterations of the algorithm one of the parameters $\psi_i$ approaches the value one, while all the others converge to zero, as shown in Figure 15.3.
[Figure 15.3: sample evolution of the parameters $\psi_{i,k}$: at each iteration every parameter is reduced by $\Delta$ toward zero, and the parameter vector is then rescaled, $\psi'_{i,k+1} \to (1+\epsilon)\,\psi'_{i,k+1}$, to restore the constraint.]
We now want to obtain the same adaptation rule for the parameters $\{\psi_i\}$ using the output signal $y_k$. We define $\boldsymbol{\psi}_k$ as the vector of the parameters of $\boldsymbol{\psi}$ at instant $k$,
$$ \boldsymbol{\psi}_k = [\ldots, \psi_{0,k}, \psi_{1,k}, \ldots]^T \qquad (15.19) $$
and
$$ \mathbf{a}_k = [\ldots, a_k, a_{k-1}, \ldots]^T \qquad (15.20) $$
Therefore
$$ y_k = \boldsymbol{\psi}_k^T\, \mathbf{a}_k \qquad (15.21) $$
Assume that at the beginning of the adaptation process the overall system $\Psi$ satisfies the condition $\|\boldsymbol{\psi}\|^2 = 1$ but deviates significantly from the identity; then the equalizer output signal $y_k$ will occasionally assume positive or negative values much larger in magnitude than $\alpha_{max} = \max a_k$. The peak value of $y_k$ is given by $\pm\alpha_{max} \sum_i |\psi_{i,k}|$, obtained with symbols $\{a_{k-i}\}$ equal to $\pm\alpha_{max}$, and indicates that the distortion is too large and must be reduced. In this case a correction of a fixed value toward zero is obtained using the error signal (15.22); the parameters are then updated as
$$ \boldsymbol{\psi}_{k+1} = \boldsymbol{\psi}_k - \mu\, e_k\, \mathbf{a}_k \qquad (15.23) $$
If the coefficients are scaled so that the condition $\|\boldsymbol{\psi}_k\|^2 = 1$ is satisfied at every $k$, we obtain a coefficient updating algorithm that approximates algorithms (15.17) and (15.18).
Obviously, algorithm (15.23) cannot be directly applied, as the parameters of the overall system $\Psi$ are not available. However, observe that if the linear transformation $H$ is non-singular, then formally $C = H^{-1}\Psi$. Therefore the overall minima of $V$ at the points $\Psi = \pm I$ are mapped into overall minima at the points $C = \pm H^{-1}$ of a cost function $J$ that is the image of $V$ under the transformation $H^{-1}$, as illustrated in Figure 15.4. Furthermore, it is seen that the direction of steepest descent of $V$ is not modified by this transformation. Thus the updating terms for the equalizer coefficients are still given by (15.22) and (15.23), if the symbols $a_k$ are replaced by the channel output samples $x_k$.
[Figure 15.4: the parameter space of $\boldsymbol{\psi}$ is mapped by $H^{-1}$ onto the equalizer coefficient space $C$.]
Then, a general algorithm that converges to the desired solution $C = H^{-1}$ can be formulated as follows:
• observe the equalizer output signal $\{y_k\}$ and determine its peak value;
• whenever a peak value occurs, update the coefficients according to the algorithm
$$ e_k = y_k - \alpha_{max}\,\mathrm{sgn}(y_k), \qquad \mathbf{c}_{k+1} = \mathbf{c}_k - \mu\, e_k\, \mathbf{x}_k \qquad (15.24) $$
• scale the coefficients so that the statistical power of the equalizer output samples is equal to the statistical power of the channel input symbols.
We observe that it is not practical to implement an algorithm that requires computing the
peak value of the equalizer output signal and updating the coefficients only when a peak
value is observed. In the next sections we describe algorithms that allow the updating of
the coefficients at every modulation interval, thus avoiding the need of computing the peak
value of the signal at the equalizer output and of scaling the coefficients.
where
$$ \gamma_S = \frac{\mathrm{E}[a_k^2]}{\mathrm{E}[|a_k|]} \qquad (15.26) $$
The gradient of $J$ is given by (15.27); the resulting quantity assumes the role of a pseudo error that during self-training replaces the error used in the LMS decision-directed algorithm. We recall that for the LMS algorithm the error signal is4
$$ e_k = y_k - \hat a_k \qquad (15.29) $$
where $\hat a_k$ is the detection of the symbol $a_k$, obtained by a threshold detector from the sample $y_k$. Figure 15.5 shows the pseudo error $\varepsilon_{S,k}$ as a function of the value of the equalizer output sample $y_k$.
4 Note that in this chapter, the error signal is defined with opposite sign with respect to the previous chapters.
15.2. Three algorithms for PAM systems 1091
Figure 15.5. Characteristic of the pseudo error $\varepsilon_{S,k}$ as a function of the equalizer output.
Therefore the Sato algorithm for the coefficient updating of an adaptive equalizer assumes the expression
$$ \mathbf{c}_{k+1} = \mathbf{c}_k - \mu\, \varepsilon_{S,k}\, \mathbf{x}_k \qquad (15.30) $$
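The update (15.30) can be sketched in a few lines. The following toy simulation (channel, step size, and all names are our own illustrative assumptions, not the book's examples) runs the Sato update on binary PAM through a mild channel $1 + 0.3 z^{-1}$, where $\gamma_S = 1$:

```python
import random

def sato_equalize(x, mu, N, gamma_s):
    """Self-training equalizer with the Sato update (15.28)-(15.30):
    eps_S = y_k - gamma_s*sgn(y_k);  c <- c - mu * eps_S * x_k."""
    c = [0.0] * N
    c[N // 2] = 1.0                      # center-spike initialization
    out = []
    for k in range(N - 1, len(x)):
        xv = [x[k - n] for n in range(N)]
        y = sum(cn * xn for cn, xn in zip(c, xv))
        eps = y - gamma_s * (1.0 if y >= 0 else -1.0)
        c = [cn - mu * eps * xn for cn, xn in zip(c, xv)]
        out.append(y)
    return c, out

random.seed(1)
a = [random.choice((-1.0, 1.0)) for _ in range(6000)]
x = [a[k] + 0.3 * (a[k - 1] if k > 0 else 0.0) for k in range(len(a))]
c, y = sato_equalize(x, mu=0.01, N=7, gamma_s=1.0)  # binary PAM: gamma_S = 1

# residual distortion (|y|-1)^2 shrinks as the equalizer opens the eye
mse_start = sum((abs(v) - 1.0) ** 2 for v in y[:200]) / 200
mse_end = sum((abs(v) - 1.0) ** 2 for v in y[-200:]) / 200
```

For binary transmission the Sato error coincides with the decision-directed error, which is consistent with the remark below that the pseudo error vanishes at $C = \pm H^{-1}$ only in the binary case.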
It was proved by Benveniste, Goursat, and Ruget [9] that, if the probability density function of the symbols $\{a_k\}$ is sub-Gaussian,5 then the Sato cost function (15.25) admits as unique points of minimum the systems $C = \pm H^{-1}$, apart from a possible lag; note, however, that the uniqueness of the points of minimum of the Sato cost function is obtained by assuming a continuous probability distribution of the input symbols. In the case of a discrete probability distribution with an alphabet $\mathcal{A} = \{\pm 1, \pm 3, \ldots, \pm(M-1)\}$, the convergence properties of the Sato algorithm are not always satisfactory.
Another undesirable characteristic of the Sato algorithm is that the pseudo error $\varepsilon_{S,k}$ is not equal to zero for $C = \pm H^{-1}$, unless we consider binary transmission. In fact, only the gradient of the cost function given by (15.27) is equal to zero for $C = \pm H^{-1}$; moreover, we find that the variance of the pseudo error may assume non-negligible values in the neighborhood of the desired solution.
Benveniste–Goursat algorithm
To mitigate the above-mentioned inconvenience, we observe that the error $e_k$ used in the LMS algorithm becomes zero, in the absence of noise, for $C = \pm H^{-1}$. It is possible to combine the two error signals, obtaining the pseudo error proposed by Benveniste and Goursat [10], given by
$$ \varepsilon_{G,k} = k_1\, e_k + k_2\, |e_k|\, \varepsilon_{S,k} \qquad (15.31) $$
5 A probability density function $p_{a_k}(\alpha)$ is sub-Gaussian if it is uniform or if $p_{a_k}(\alpha) = K \exp\{-g(\alpha)\}$, where $g(\alpha)$ is an even function such that both $g(\alpha)$ and $\frac{1}{\alpha}\frac{dg}{d\alpha}$ are strictly increasing in the domain $[0, +\infty)$.
where $k_1$ and $k_2$ are constants. If the distortion level is high, $|e_k|$ assumes large values and the second term of (15.31) ensures convergence of the algorithm during self-training. Near convergence, for $C \simeq \pm H^{-1}$, the second term has the same order of magnitude as the first, and the pseudo error assumes small values in the neighborhood of $C = \pm H^{-1}$.
Note that an algorithm using the pseudo error (15.31) allows a smooth transition of the equalizer from the self-training mode to the decision-directed mode. In the case of a sudden change in channel characteristics, the equalizer finds itself working again in self-training mode. Thus the transitions between the two modes occur without explicit monitoring of the level of distortion of the signal $\{y_k\}$ at the equalizer output.
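The combination (15.31) can be written as a one-line function; the numeric values below (the constants $k_1 = k_2 = 1$ and the toy $\gamma_S$) are our own illustrative choices, not values from the text:

```python
def goursat_error(y, a_hat, gamma_s, k1, k2):
    """Benveniste-Goursat pseudo error (15.31):
    eps_G = k1*e_k + k2*|e_k|*eps_S, with e_k = y - a_hat (DD error, (15.29))
    and eps_S = y - gamma_s*sgn(y) (Sato error, (15.28))."""
    e = y - a_hat
    eps_s = y - gamma_s * (1.0 if y >= 0 else -1.0)
    return k1 * e + k2 * abs(e) * eps_s

# far from convergence the |e|*eps_S term dominates; near convergence both
# terms shrink with the decision-directed error e_k
big = goursat_error(3.0, 1.0, gamma_s=1.8, k1=1.0, k2=1.0)
small = goursat_error(1.02, 1.0, gamma_s=1.8, k1=1.0, k2=1.0)
```

The $|e_k|$ weighting is what makes the Sato term fade out automatically as decisions become reliable.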
Stop-and-go algorithm
The stop-and-go algorithm proposed by Picchi and Prati [11] can be seen as a variant of the Sato algorithm that achieves the same objectives as the Benveniste–Goursat algorithm with better convergence properties. The pseudo error for the stop-and-go algorithm is formulated as
$$ \varepsilon_{P,k} = \begin{cases} e_k & \text{if } \mathrm{sgn}(e_k) = \mathrm{sgn}(\varepsilon_{S,k}) \\ 0 & \text{otherwise} \end{cases} \qquad (15.32) $$
where $\varepsilon_{S,k}$ is the Sato pseudo error given by (15.28), and $e_k$ is the error used in the decision-directed algorithm, given by (15.29). The basic idea is that the algorithm converges if the updating of the equalizer coefficients is turned off with sufficiently high probability every time the sign of the error (15.29) differs from the sign of the error $e_{id,k} = y_k - a_k$, that is, $\mathrm{sgn}(e_k) \neq \mathrm{sgn}(e_{id,k})$. As $e_{id,k}$ is not available in a self-training equalizer, with the stop-and-go algorithm coefficient updating is turned off whenever the sign of the error $e_k$ differs from the sign of the Sato error $\varepsilon_{S,k}$. Obviously, in this way we also get a non-zero probability that coefficient updating is inactive when the condition $\mathrm{sgn}(e_k) = \mathrm{sgn}(e_{id,k})$ occurs, but this does not usually bias the convergence of the algorithm.
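The gating rule (15.32) can be sketched as follows (the numeric examples use 4-PAM with alphabet $\{\pm 1, \pm 3\}$, for which $\gamma_S = \mathrm{E}[a_k^2]/\mathrm{E}[|a_k|] = 5/2$; the function name is ours):

```python
def stop_and_go_error(y, a_hat, gamma_s):
    """Stop-and-go pseudo error (15.32): keep the DD error e_k = y - a_hat
    only when its sign agrees with the Sato error; otherwise stop updating."""
    e = y - a_hat
    eps_s = y - gamma_s * (1.0 if y >= 0 else -1.0)
    return e if (e >= 0) == (eps_s >= 0) else 0.0

# signs agree -> update with e_k; signs disagree -> update is turned off
go = stop_and_go_error(3.4, 3.0, gamma_s=2.5)    # e = 0.4, eps_S = 0.9: keep
stop = stop_and_go_error(2.8, 3.0, gamma_s=2.5)  # e = -0.2, eps_S = 0.3: stop
```

The Sato error here acts only as a sign check; the magnitude of the update always comes from the decision-directed error.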
Remarks
At this point we can make the following observations.
• Self-training algorithms based on the minimization of a cost function that includes the term $\mathrm{E}[|y_k|^p]$, $p \ge 2$, can be explained with reference to the algorithm (15.24), because the effect of raising the amplitude of the equalizer output sample to the $p$-th power is that of emphasizing the contribution of samples with large amplitude.
• The extension of the Sato cost function (15.25) to QAM systems, which we discuss in Section 15.5, is given by
$$ J = \mathrm{E}\left[ \tfrac{1}{2}\, |y_k|^2 - \gamma_S\, |y_k| \right] \qquad (15.33) $$
where $\gamma_S = \mathrm{E}[|a_k|^2]/\mathrm{E}[|a_k|]$. In general this term guarantees that, at convergence, the statistical power of the equalizer output samples is equal to the statistical power of the input symbols.
15.3. The contour algorithm for PAM systems 1093
• In the algorithm (15.24), the equalizer coefficients are updated only when we observe a peak value of the equalizer output signal. As the peak value decreases as the equalization process progresses, the updating of the equalizer coefficients ideally depends on a threshold that varies with the level of distortion in the overall system impulse response.
[Figure: contour line $T_0$ and time-varying threshold $T_k$ in the plane of $\mathrm{Re}[y_k]$, $\mathrm{Im}[y_k]$, with $\alpha_{max}$ marked on both axes.]
In this algorithm, to avoid the computation of the threshold $T_k$ at each iteration, the amplitude of the equalizer output signal is compared with $\alpha_{max}$ rather than with $T_k$; note that the computation of $\Theta(y_k)$ depends on whether $y_k$ falls inside or outside the constellation boundary. In the next section, for a two-dimensional constellation, we define in general the constellation boundary as the contour line that connects the outer points of the constellation; for this reason we refer to this algorithm as the contour algorithm [12].
To derive the algorithms (15.34) and (15.35) from the algorithm (15.24), several approximations are introduced; consequently, the convergence properties cannot be directly derived from those of the algorithm (15.24). In Appendix 15.A we show how $C_A$ should be defined to obtain the desired behavior of the algorithms (15.34) and (15.35) in the case of systems with input symbols having a uniform continuous distribution.
An advantage of the functional introduced in this section with respect to the Sato cost function is that the variance of the pseudo error vanishes at the points of minimum $\Psi = \pm I$; this means that it is possible to obtain convergence of the MSE to a steady-state value close to the achievable minimum. Furthermore, the radial component of the gradient of $\mathrm{E}[\Phi(y_k)]$ vanishes at every point on the unit sphere, whereas the radial component of the gradient of the Sato cost function vanishes on the unit sphere only at the points $\Psi = \pm I$. As the direction of steepest descent does not intersect the unit sphere, the contour algorithm avoids the overshooting of the convergence trajectories observed with the Sato algorithm; in other words, the stochastic gradient yields a coefficient update made in the correct direction more often than in the case of the Sato algorithm. Therefore, substantially better convergence properties are expected for the contour algorithm, even in systems with a discrete probability distribution of the input symbols.
The complexity of the algorithms (15.34) and (15.35) can be deemed prohibitive for practical implementations, especially for self-training equalization in high-speed communication systems, as the parameter $C_{A,k}$ must be estimated at each iteration. In the next section we discuss a simplified algorithm that allows implementation with low complexity; we will see later how the simplified formulation of the contour algorithm can be extended to self-training equalization of partial response and QAM systems.
where $\{w_k\}$ denotes additive white Gaussian noise. The equalizer output is given by $y_k = \mathbf{c}_k^T \mathbf{x}_k$. To obtain an algorithm that does not require knowledge of the parameter $C_A$, the definition (15.34) suggests introducing the pseudo error
$$ \varepsilon_{CA,k} = \begin{cases} y_k - \alpha_{max}\, \mathrm{sgn}(y_k) & \text{if } |y_k| \ge \alpha_{max} \\ \delta_k\, \mathrm{sgn}(y_k) & \text{otherwise} \end{cases} \qquad (15.37) $$
where $\delta_k$ is a non-negative parameter that is updated at every iteration as follows:
$$ \delta_{k+1} = \begin{cases} \delta_k - \Delta\, \dfrac{M-1}{M} & \text{if } |y_k| \ge \alpha_{max} \\[4pt] \delta_k + \Delta\, \dfrac{1}{M} & \text{otherwise} \end{cases} \qquad (15.38) $$
where $\Delta$ is a positive constant. The initial value $\delta_0$ is not a critical system parameter and can be chosen, for example, equal to zero; the coefficient updating algorithm is thus given by
$$ \mathbf{c}_{k+1} = \mathbf{c}_k - \mu\, \varepsilon_{CA,k}\, \mathbf{x}_k \qquad (15.39) $$
In contrast to (15.34), $\delta_k$ does not provide a measure of the distortion as $C_A$ does. The definition (15.37) is justified by the fact that the term $y_k - [\alpha_{max} + C_A]\,\mathrm{sgn}(y_k)$ in (15.34) can be approximated as $y_k - \alpha_{max}\,\mathrm{sgn}(y_k)$, because if the event $|y_k| \ge \alpha_{max}$ occurs the pseudo error $y_k - \alpha_{max}\,\mathrm{sgn}(y_k)$ can be used for coefficient updating. Therefore, $\delta_k$ should increase in the presence of distortion only if the event $|y_k| < \alpha_{max}$ occurs more frequently than expected. This behavior of the parameter $\delta_k$ is obtained by applying (15.38). Moreover, (15.38) guarantees that $\delta_k$ assumes values approaching zero at the convergence of the equalization process; in fact, in this case the probabilities of the events $\{|y_k| < \alpha_{max}\}$ and $\{|y_k| \ge \alpha_{max}\}$ assume approximately the values $(M-1)/M$ and $1/M$, respectively, which correspond to the probabilities of these events for a correctly equalized noisy PAM signal. Figure 15.7 shows the pseudo error $\varepsilon_{CA,k}$ as a function of the value of the equalizer output sample $y_k$.
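One step of (15.37)–(15.38) can be sketched as follows (the clamp of $\delta_k$ at zero is our reading of "non-negative parameter", and the numeric values are toy choices, not the book's):

```python
def contour_step(y, alpha_max, delta, Delta, M):
    """Pseudo error (15.37) and threshold update (15.38) of the contour
    algorithm for M-ary PAM; delta is clamped to stay non-negative
    (our reading of the text, which calls delta_k a non-negative parameter)."""
    sgn = 1.0 if y >= 0 else -1.0
    if abs(y) >= alpha_max:
        eps = y - alpha_max * sgn                  # clipped outer error
        delta_new = delta - Delta * (M - 1) / M
    else:
        eps = delta * sgn                          # small inner correction
        delta_new = delta + Delta / M
    return eps, max(delta_new, 0.0)

# M = 4 PAM, alpha_max = 3: a sample beyond the outer level gives the clipped
# error y - alpha_max*sgn(y); an inner sample gives the error delta*sgn(y)
eps_out, d1 = contour_step(3.5, 3.0, delta=0.10, Delta=0.01, M=4)
eps_in, d2 = contour_step(-1.2, 3.0, delta=0.10, Delta=0.01, M=4)
```

On a correctly equalized signal the two branches fire with probabilities $1/M$ and $(M-1)/M$, so the expected drift of $\delta_k$ is zero, which is exactly the property the text relies on.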
The contour algorithm has been described for the case of uniformly distributed input symbols; however, this assumption is not necessary. In general, if $\{y_k\}$ represents an equalized signal, the terms $(M-1)/M$ and $1/M$ in (15.38) are substituted, respectively, by
$$ p_0 = P[|a_k| < \alpha_{max}] + \tfrac{1}{2}\, P[|a_k| = \alpha_{max}] \simeq P[|y_k| < \alpha_{max}] \qquad (15.40) $$
and
$$ p_1 = \tfrac{1}{2}\, P[|a_k| = \alpha_{max}] \simeq P[|y_k| \ge \alpha_{max}] \qquad (15.41) $$
Figure 15.7. Characteristic of the pseudo error $\varepsilon_{CA,k}$ as a function of the equalizer output.
We note that the considered receiver makes use of signal samples at the symbol rate; the
algorithm can also be applied for the initial convergence of a fractionally spaced equalizer
(see Section 8.4) in case a sampling rate higher than the symbol rate is adopted.
Figure 15.8. Block diagram of a self-training equalizer for a PR-IV system using the Sato
algorithm.
6 In general, for an ideal PR-IV channel in the absence of noise, if the alphabet of the input symbols is $\mathcal{A} = \{\pm 1, \pm 3, \ldots, \pm(M-1)\}$, the output symbols assume one of the $(2M-1)$ values $\{0, \pm 2, \ldots, \pm 2(M-1)\}$.
Figure 15.9. Block diagram of a self-training equalizer with the contour algorithm for a
QPR-IV system.
In the described scheme, the samples of the received signal can be initially filtered by a filter with transfer function $1/(1 - a D^2)$, $0 < a < 1$, to reduce the correlation among samples. The resulting signal is then input to the equalizer delay line.
We let
$$ \Phi(v) = \tfrac{1}{2}\, v^2 - \gamma_S\, |v| \qquad (15.57) $$
Figure 15.10. Block diagram of a self-training equalizer with the Sato algorithm for a QAM
system.
15.5. Self-training equalization for QAM systems 1101
and
$$ \gamma_S = \frac{\mathrm{E}[a_{k,I}^2]}{\mathrm{E}[|a_{k,I}|]} = \frac{\mathrm{E}[a_{k,Q}^2]}{\mathrm{E}[|a_{k,Q}|]} \qquad (15.58) $$
The gradient of (15.56) with respect to $\mathbf{c}$ yields (see also (8.380))
The constant $R_p$ is chosen so that the gradient is equal to zero for a perfectly equalized system; therefore we have
$$ R_p = \frac{\mathrm{E}[|a_k|^{2p}]}{\mathrm{E}[|a_k|^p]} \qquad (15.68) $$
For example, for a 64-QAM constellation we obtain $R_1 = 6.9$ and $R_2 = 58$. The 64-QAM constellation and the circle of radius $R_1 = 6.9$ are illustrated in Figure 15.12.
Using (15.67), we obtain the equalizer coefficient updating law
$$ \mathbf{c}_{k+1} = \mathbf{c}_k - \mu\, \big(|\tilde y_k| - R_1\big)\, \frac{\tilde y_k}{|\tilde y_k|}\, \mathbf{x}_k^* \qquad (15.70) $$
We note that the Sato algorithm, introduced in Section 15.2, can then be viewed as a
particular case of the CMA.
Figure 15.12. The 64-QAM constellation and the circle of radius $R_1 = 6.9$.
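The constants (15.68) and the update (15.70) can be sketched directly; computing $R_p$ over the 64-QAM alphabet reproduces the values quoted above (function names are ours):

```python
import itertools

def dispersion_constant(points, p):
    """R_p = E[|a|^(2p)] / E[|a|^p], cf. (15.68)."""
    n = len(points)
    return (sum(abs(a) ** (2 * p) for a in points) / n) \
        / (sum(abs(a) ** p for a in points) / n)

levels = (-7, -5, -3, -1, 1, 3, 5, 7)
qam64 = [complex(i, q) for i, q in itertools.product(levels, levels)]
R1 = dispersion_constant(qam64, 1)   # ~6.9, the value quoted in the text
R2 = dispersion_constant(qam64, 2)   # = 58, the value quoted in the text

def cma_update(c, xv, mu, R1):
    """CMA coefficient update (15.70):
    c <- c - mu * (|y| - R1) * (y/|y|) * conj(x)."""
    y = sum(cn * xn for cn, xn in zip(c, xv))
    e = (abs(y) - R1) * (y / abs(y)) if abs(y) > 0 else 0.0
    return [cn - mu * e * xn.conjugate() for cn, xn in zip(c, xv)]
```

By construction, a sample with $|\tilde y_k| = R_1$ produces no update, which is the zero-gradient condition used to choose $R_p$.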
is still valid, and the complex-valued baseband equivalent channel output is given by
$$ x_k = \sum_{i=-\infty}^{+\infty} h_i\, a_{k-i} + w_k \qquad (15.71) $$
where, by analogy with (15.40), the probability pS ' P[yk 2 S] is computed assuming
that yk is an equalized signal in the presence of additive noise.
Figure 15.13. Illustration of the contour line $\mathcal{C}$, with a sample $y_k$ and its projection $y_k^C$, and of the surface $\mathcal{S}$ for a 64-QAM constellation.
Let $\alpha_{max}$ be the maximum absolute value of the real and imaginary parts of the symbols of the square $L \times L$ constellation. If $|y_{k,I}| \ge \alpha_{max}$ or $|y_{k,Q}| \ge \alpha_{max}$, but not both, the projection of the sample $y_k$ onto the contour line $\mathcal{C}$ yields a non-zero pseudo error along one dimension and a zero error along the other. If both $|y_{k,I}|$ and $|y_{k,Q}|$ are larger than $\alpha_{max}$, $y_k^C$ is chosen as the corner point of the constellation closest to $y_k$; in this case we obtain a non-zero pseudo error in both dimensions.
Thus the equalizer coefficients are updated according to the algorithm
Clearly, the contour algorithm can also be applied to systems that use non-square constellations. In any case, the robust algorithm for carrier phase tracking described in the next section requires that the shape of the constellation be non-circular.
As for equalizer coefficient updating, reliable information for updating the carrier phase
estimate 'Ok is only available if yk falls outside of the region S. As illustrated in Figure 15.14,
Figure 15.14. Illustration of the rotation of the symbol constellation in the presence of a phase error, and definition of $\Delta\varphi_k$ (sample $y_k$ with projection $y_k^C$ near the corner $(+\alpha_{max}, +\alpha_{max})$; regions $\mathcal{S}$ and $\mathcal{D}$).
the phase estimation error can then be computed as (see also (15.65))

    \Delta\varphi_k = \operatorname{Im}[y'_k] \, (\operatorname{Re}[y'_k] - \alpha_{max})        (15.76)

where y'_k = y_k \, e^{-j\ell\pi/2} (15.77), and ℓ is chosen such that Re[y'_k] > |Im[y'_k]| (shaded
region in Figure 15.14). Furthermore, we observe that the information on the phase error
obtained by samples of the sequence {y_k} that fall in the corner regions, where |y_{k,I}| > α_max
and |y_{k,Q}| > α_max, is not important. Thus we calculate a phase error only if y_k is outside
of S, but not in the corner regions, that is if y'_k ∈ D, with D = {y'_k : Re[y'_k] > α_max,
|Im[y'_k]| < α_max}. Then (15.76) becomes
    \Delta\varphi_k =
    \begin{cases}
    \operatorname{Im}[y'_k] \, (\operatorname{Re}[y'_k] - \alpha_{max}) & \text{if } y'_k \in D \\
    0 & \text{otherwise}
    \end{cases}        (15.78)
In the presence of a frequency offset equal to Ω/(2π), the probability distribution of the
samples {y_k} rotates at a rate of Ω/(2π) revolutions per second. For large values of Ω/(2π),
the phase error Δφ_k does not provide sufficient information for the carrier phase tracking
system to achieve a lock condition; therefore the update of φ̂_k must be made by a second-
order PLL, where in the update of the second-order term a factor that is related to the value
of the frequency offset must be included (see Section 14.7).
The needed information is obtained by observing the statistical behavior of the term Im[y'_k],
conditioned on the event y'_k ∈ D. At the instants in which the sampling distribution of y_k is
aligned with S, the distribution of Im[y'_k] is uniform in the range [-α_max, α_max]. Between
these instants, the distribution of Im[y'_k] exhibits a time-varying behavior with a downward
or upward trend depending on the sign of the frequency offset, with a minimum variance
when the corners of the rotating probability distribution of y_k, which we recall rotates at a
rate of Ω/(2π) revolutions per second, cross the coordinate axes. Defining
    Q(v) =
    \begin{cases}
    v & \text{if } |v| < \alpha_{max} \\
    0 & \text{otherwise}
    \end{cases}        (15.79)
from the observation of Figure 15.14, a simple method to extract information on Ω/(2π)
consists in evaluating

    \Delta \operatorname{Im}[y'_k] = Q(\operatorname{Im}[y'_k]) - Q(\operatorname{Im}[y'_m])        (15.80)

where m < k denotes the last time index for which y'_m ∈ D. In the mean, Δ Im[y'_k] exhibits
the sign of the frequency offset.
The equations for the updating of the parameters of a second-order phase-locked loop
for the carrier phase recovery then become

    \begin{cases}
    \hat{\varphi}_{k+1} = \hat{\varphi}_k + \mu_{\varphi} \, \Delta\varphi_k + \Delta\hat{\varphi}_{c,k} \\
    \Delta\hat{\varphi}_{c,k+1} = \Delta\hat{\varphi}_{c,k} + \mu_{f1} \, \Delta\varphi_k + \mu_{f2} \, \Delta \operatorname{Im}[y'_k]
    \end{cases}
    \quad \text{if } y'_k \in D
                                                                              (15.81)
    \begin{cases}
    \hat{\varphi}_{k+1} = \hat{\varphi}_k \\
    \Delta\hat{\varphi}_{c,k+1} = \Delta\hat{\varphi}_{c,k}
    \end{cases}
    \quad \text{otherwise}
where μ_φ, μ_{f1} and μ_{f2} are suitable adaptation gains; typically, μ_φ is in the range 10^{-4} to
10^{-3}, μ_{f1} = (1/4) μ_φ^2, and μ_{f2} ≃ μ_{f1}.
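One iteration of the loop (15.81), with the phase error computed as in (15.78), can be sketched as follows. The helper name and the sign conventions are illustrative assumptions and should be checked against a full implementation.

```python
def pll_step(phi, dphi_c, y_rot, amax, mu_phi, mu_f1, mu_f2, d_im):
    """One update of the second-order carrier-phase loop (15.81) (sketch).

    phi    : current phase estimate
    dphi_c : second-order (frequency) term
    y_rot  : rotated sample y'_k
    d_im   : frequency-offset indicator Delta Im[y'_k] from (15.80)
    """
    in_D = (y_rot.real > amax) and (abs(y_rot.imag) < amax)
    if in_D:
        dphi = y_rot.imag * (y_rot.real - amax)      # phase error, as in (15.78)
        phi = phi + mu_phi * dphi + dphi_c           # first-order update
        dphi_c = dphi_c + mu_f1 * dphi + mu_f2 * d_im  # second-order update
    return phi, dphi_c                               # held unchanged outside D
```

Samples outside the region D leave both terms untouched, which is exactly the second branch of (15.81).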
The rotation of y_k given by (15.77) to obtain y'_k also has the advantage of simplifying
the error computation for self-training equalizer coefficient adaptation with the contour
algorithm.
With no significant effect on performance, we can introduce a simplification similar to
that adopted to update the carrier phase, and let the pseudo error equal zero if y_k is found
in the corner regions, that is ε^{CA}_k = 0 if |Im[y'_k]| > α_max. By using (15.72) and (15.73) to
compute the pseudo error, the coefficient updating equation (15.74) becomes (see (8.382))
Example 15.6.1
As a first example, we consider a 16-PAM system (M = 16) with a uniform probability
distribution of the input symbols and a symbol rate equal to 25 MBaud; the transmit and
receive filters are designed to yield an overall raised cosine channel characteristic for a
cable length of 50 m. In the simulations, the cable length is chosen equal to 100 m, and the
received signal is disturbed by additive white Gaussian noise. The signal-to-noise ratio at the
receiver input is equal to Γ = 36 dB. Self-training equalization is achieved by a fractionally
spaced equalizer having N = 32 coefficients, with the input signal sampled with sampling period
equal to T/2. Figure 15.15 shows the convergence of the contour algorithms (15.37) and
Figure 15.15. Illustration of the convergence of the contour algorithm for a 16-PAM system:
(a) behavior of the parameter δ_n, (b) MSE convergence, (c) relative frequency of equalizer
output samples at the beginning and the end of the convergence process.
(15.38) for δ_0 = 0 and c_0 chosen equal to the zero vector. The results are obtained for a
cable with attenuation (4.148) equal to α(f)|_{f=1} = 3.85 × 10^{-6} [m^{-1} Hz^{-1/2}], parameters
of the self-training equalizer given by μ = 10^{-5} and Δ = 2.5 × 10^{-4}, and ideal timing
recovery.
Example 15.6.2
We consider self-training equalization for a baseband quaternary partial response class IV
system (M = 4) for transmission at 125 Mbit/s over UTP cables; a VLSI transceiver im-
plementation for this system will be described in Chapter 19. We compare the performance
of the contour algorithm, described in Section 15.3, with that of the Sato algorithm for partial
response systems. Various realizations of the MSE convergence of a self-training equalizer
with N = 16 coefficients are shown in Figures 15.16 and 15.17 for the Sato algorithm
and the contour algorithm, respectively. The curves are parameterized by Δt = ΔT/T,
where T = 16 ns, and ΔT denotes the difference between the sampling phase of the
channel output signal and the optimum sampling phase that yields the minimum MSE.
We note that the contour algorithm has a faster convergence with respect to the Sato al-
gorithm and yields significantly lower values of MSE in the steady state. The Sato algo-
rithm can be applied only if timing recovery is achieved prior to equalization; note that
the convergence characteristics of the contour algorithm make self-training equalization
possible even in the presence of considerable distortion and phase drift of the received
signal.
Figure 15.16. MSE convergence with the Sato algorithm for a QPR-IV system [13].
15.6. Examples of applications 1109
Figure 15.17. MSE convergence with the contour algorithm for a QPR-IV system [13].
Figure 15.18. Overall baseband equivalent channel impulse response (real and imaginary
parts, as a function of t/T) for simulations of a 256-QAM system. [From [12], © 1998
Springer-Verlag London, Ltd.]
Example 15.6.3
We now examine self-training equalization for a 256-QAM transmission system having a
square constellation with L = 16 (M = 256), and a symbol rate equal to 6 MBaud. Along
each dimension, symbols ±3 and ±1 have probability 2/20, and symbols ±15, ±13, ±11,
±9, ±7, and ±5 have probability 1/20. The overall baseband equivalent channel impulse
response is illustrated in Figure 15.18. The received signal is disturbed by additive white
Gaussian noise. The signal-to-noise ratio at the receiver input is equal to Γ = 39 dB. Signal
equalization is obtained by a fractionally spaced equalizer having N = 32 coefficients, with the
input signal sampled with sampling period equal to T/2.
Figure 15.19 shows the convergence of the contour algorithm and the behavior of the
parameter δ_k for p_S = 361/400, various initial values of δ_0, and c_0 given by a vector
with all elements equal to zero except for one element. Results are obtained for μ = 10^{-4},
Δ = 10^{-4}, and ideal timing and carrier phase recovery.
Example 15.6.4
With reference to the previous example, we examine the behavior of the carrier phase
recovery algorithm, assuming ideal timing recovery. Figure 15.20 illustrates the behavior
of the MSE and of the second-order term Δφ̂_{c,k} for an initial frequency offset of +2.5 kHz,
μ_φ = 4 × 10^{-4}, μ_{f1} = 8 × 10^{-8}, and μ_{f2} = 2 × 10^{-8}.
Figure 15.19. Convergence behavior of MSE and parameter δ_k using the contour algorithm
for a 256-QAM system with non-uniform distribution of input symbols. [From [12], © 1998
Springer-Verlag London, Ltd.]
Figure 15.20. Illustration of the convergence behavior of MSE and second-order term Δφ̂_{c,k}
using the contour algorithm in the presence of an initial frequency offset equal to 500 ppm
for a 256-QAM system with non-uniform distribution of input symbols. [From [12], © 1998
Springer-Verlag London, Ltd.]
Bibliography
[1] H. Ichikawa, J. Sango, and T. Murase, “256 QAM multicarrier 400 Mb/s microwave
radio system field tests”, in Proc. 1987 IEEE Int. Conference on Communications,
pp. 1803–1808, 1987.
[2] F. J. Ross and D. P. Taylor, “An enhancement to blind equalization algorithms”, IEEE
Trans. on Communications, vol. 39, pp. 636–639, May 1991.
[3] J. G. Proakis and C. L. Nikias, “Blind equalization”, in Proc. SPIE Adaptive Signal
Processing, vol. 1565, pp. 76–87, July 22–24 1991.
[4] S. Bellini, “Blind equalization and deconvolution”, in Proc. SPIE Adaptive Signal
Processing, vol. 1565, pp. 88–101, July 22–24 1991.
[5] N. Benvenuto and T. W. Goeddel, “Classification of voiceband data signals using the
constellation magnitude”, IEEE Trans. on Communications, vol. 43, pp. 2759–2770,
Nov. 1995.
[6] R. Liu and L. Tong, eds., “Special issue on blind systems identification and estimation”,
Proceedings of the IEEE, vol. 86, Oct. 1998.
[7] Y. Sato, “A method of self-recovering equalization for multilevel amplitude-
modulation systems”, IEEE Trans. on Communications, vol. 23, pp. 679–682, June
1975.
[8] L. Tong, G. Xu, B. Hassibi, and T. Kailath, “Blind channel identification based on
second-order statistics: a frequency-domain approach”, IEEE Trans. on Information
Theory, vol. 41, pp. 329–334, Jan. 1995.
[9] A. Benveniste, M. Goursat, and G. Ruget, “Robust identification of a nonminimum
phase system: blind adjustment of a linear equalizer in data communications”, IEEE
Trans. on Automatic Control, vol. 25, pp. 385–399, June 1980.
[10] A. Benveniste and M. Goursat, “Blind equalizers”, IEEE Trans. on Communications,
vol. 32, pp. 871–883, Aug. 1984.
[11] G. Picchi and G. Prati, “Blind equalization and carrier recovery using a ‘Stop-and-Go’
decision directed algorithm”, IEEE Trans. on Communications, vol. 35, pp. 877–887,
Sept. 1987.
[12] G. Cherubini, S. Ölçer, and G. Ungerboeck, “The contour algorithm for self-training
equalization”, in Broadband Wireless Communications, 9th Tyrrhenian Int. Workshop
on Digital Communications (M. Luise and S. Pupolin, eds), Lerici, Italy, pp. 58–69,
Sept. 7–10 1997. Berlin: Springer-Verlag, 1998.
[13] G. Cherubini, S. Ölçer, and G. Ungerboeck, “Self-training adaptive equalization for
multilevel partial-response transmission systems”, in Proc. 1995 IEEE Int. Symposium
on Information Theory, Whistler, Canada, p. 401, Sept. 17–22 1995.
[14] G. Cherubini, “Nonlinear self-training adaptive equalization for partial-response sys-
tems”, IEEE Trans. on Communications, vol. 42, pp. 367–376, February/March/April
1994.
[15] D. N. Godard, “Self recovering equalization and carrier tracking in two-dimensional
data communication systems”, IEEE Trans. on Communications, vol. 28, pp. 1867–
1875, Nov. 1980.
15.A. On the convergence of the contour algorithm 1113
We examine the function \bar{V}(\Psi) on the unit sphere S. To claim that the only minima of \bar{V}(\Psi)
are found at the points \Psi = \pm I, we apply Theorem 3.5 of [9]. Consider a pair of indices (i, j),
i ≠ j, and a fixed system with coefficients \{\bar{\psi}_\ell\}_{\ell \neq i,j}, such that R^2 = 1 - \sum_{\ell \neq i,j} \bar{\psi}_\ell^2 > 0.
For \varphi \in [0, 2\pi), let \bar{\Psi}_\varphi \in S be the system with coefficients \{\bar{\psi}_\ell\}_{\ell \neq i,j}, \bar{\psi}_i = R \cos\varphi, and
\bar{\psi}_j = R \sin\varphi; moreover, let (\partial/\partial\varphi)\bar{V}(\bar{\Psi}_\varphi) be the derivative of \bar{V}(\bar{\Psi}_\varphi) with respect to \varphi at
the point \Psi = \bar{\Psi}_\varphi. As p_{a_k}(\alpha) is sub-Gaussian, it can be shown that
    \frac{\partial}{\partial\varphi} \bar{V}(\bar{\Psi}_\varphi) = 0 \quad \text{for } \varphi = k \frac{\pi}{4}, \; k \in \mathbb{Z}        (15.85)

and

    \frac{\partial}{\partial\varphi} \bar{V}(\bar{\Psi}_\varphi) > 0 \quad \text{for } 0 < \varphi < \frac{\pi}{4}        (15.86)
From the above equations we have that the stationary points of \bar{V}(\bar{\Psi}_\varphi) correspond to systems
characterized by the property that all non-zero coefficients have the same absolute value.
Furthermore, using the symmetries of the problem, we find that the only minima are at ±I, except
for a possible delay, and that the other stationary points of \bar{V}(\bar{\Psi}_\varphi) are saddle points.
The study of the functional V is then extended to the entire parameter space. As the
results obtained for the restriction of V to S are also valid on a sphere of arbitrary radius r,
we need to study only the radial derivatives of V. For this reason, we consider the function
\tilde{V}(r) = \bar{V}(r\bar{\Psi}), whose first and second derivatives are
    \tilde{V}'(r) = 2 r \int b^2 \, p_{\bar{y}_k}(b) \, db - 2 \beta(\bar{\Psi}) \int |b| \, p_{\bar{y}_k}(b) \, db        (15.87)

and

    \tilde{V}''(r) = 2 \int b^2 \, p_{\bar{y}_k}(b) \, db > 0        (15.88)

so that \tilde{V}'(r) is negative for r < r_0 and positive for r > r_0. For a fixed point \bar{\Psi} \in S, r_0 is given by the
solution of the equation

    2 r_0 \int b^2 \, p_{\bar{y}_k}(b) \, db - 2 \beta(\bar{\Psi}) \int |b| \, p_{\bar{y}_k}(b) \, db = 0        (15.89)
Chapter 16

Applications of interference cancellation
The algorithms and structures discussed in this chapter can be applied to both wired and
wireless systems, even though transmission systems over twisted-pair cables will be con-
sidered to describe examples of applications. Full-duplex data transmission over a single
twisted-pair cable permits the simultaneous flow of information in two directions using
the same frequency band. Examples of applications of this technique are found in digital
communications systems that operate over the telephone network. In a digital subscriber
loop, at each end of the full-duplex link, a circuit called a hybrid separates the two direc-
tions of transmission. To avoid signal reflections at the near- and far-end hybrids, precise
knowledge of the line impedance would be required. As the line impedance depends on
line parameters that, in general, are not exactly known, an attenuated and distorted replica
of the transmit signal leaks to the receiver input as an echo signal. Data-driven adaptive
echo cancellation mitigates the effects of impedance mismatch.
A similar problem is caused by cross-talk in transmission systems over voice-grade
unshielded twisted-pair cables for local-area network applications, where multipair cables
are used to physically separate the two directions of transmission. Cross-talk is a statistical
phenomenon due to randomly varying differential capacitive and inductive coupling between
adjacent two-wire transmission lines (see Section 4.4.2). At the rates of several megabits per
second that are usually considered for local-area network applications, near-end cross-talk
(NEXT) represents the dominant disturbance; hence adaptive NEXT cancellation must be
performed to ensure reliable communications.
A different problem shows up in frequency–division duplexing transmission, where dif-
ferent frequency bands are used in the two directions of transmission. In such a case far–end
cross-talk (FEXT) interference is dominant. This situation occurs, for instance, in very–
high–speed digital subscriber line (VDSL) systems, where multiple users are connected to
a central station via unshielded twisted-pairs located in the same cable binder.
In voiceband data modems the model for the echo channel is considerably different
from the echo channel model adopted for baseband transmission. The transmitted signal
is a passband QAM signal, and the far-end echo may exhibit significant carrier-phase
jitter and carrier-frequency shift, which are caused by signal processing at intermediate
points in the telephone network. Therefore a digital adaptive echo canceller for voiceband
modems needs to embody algorithms that account for the presence of such additional
impairments.
1116 Chapter 16. Applications of interference cancellation
In the first three sections of this chapter,1 we describe the echo channel models and
adaptive echo canceller structures for various digital communications systems, which are
classified according to the employed modulation techniques. We also address the trade-offs
between complexity, speed of adaptation, and accuracy of cancellation in adaptive echo
cancellers. In the last section, we address the problem of FEXT interference cancellation
for upstream transmission in a VDSL system. The system is modelled as a multi–input
multi–output (MIMO) system, to which multi–user detection techniques (see Section 10.4)
can be applied.
    x(t) = r(t) + u(t) + w_R(t) = \sum_{k=-\infty}^{+\infty} a_k^R \, h(t - kT) + \sum_{k=-\infty}^{+\infty} a_k \, h_E(t - kT) + w_R(t)        (16.1)

where \{a_k^R\} is the sequence of symbols from the remote transmitter, and h(t) and h_E(t) =
\{h_{D/A} * g_E\}(t) are the impulse responses of the overall channel and of the echo channel,
1 The material presented in Sections 16.1–16.3 is reproduced with permission from G. Cherubini, “Echo Can-
cellation,” in The Mobile Communications Handbook (J. D. Gibson, ed.), Ch. 7, pp. 7.1–7.15, Boca Raton, FL:
CRC Press, 2nd ed., © 1999 CRC Press.
16.1. Echo and near–end cross-talk cancellation for PAM systems 1117
respectively. In the expression of h_E(t), the function h_{D/A}(t) is the inverse Fourier transform
of H_{D/A}(f). The signal obtained after echo cancellation is processed by a detector that out-
puts the sequence of detected symbols \{\hat{a}_k^R\}.
form of interpolation (see Chapter 14) would be required, which can significantly increase the
transceiver complexity.
canceller is independent of the other F_0 - 1 units, in the remaining part of this section we
will consider the operation of a single echo canceller.
The estimate of the echo is subtracted from the received signal. The result is defined as the
cancellation error signal

    z_k = x_k - \hat{u}_k = x_k - c_k^T a_k        (16.4)
The echo attenuation that must be provided by the echo canceller to achieve proper sys-
tem operation depends on the application. For example, for the integrated services digital
network (ISDN) U-interface transceiver, the echo attenuation must be larger than 55 dB
[2]. It is then required that the echo signals outside of the time span of the echo canceller
delay line be negligible, i.e., h_{E,n} ≃ 0 for n < 0 and n > N - 1. As a measure of system
performance, we consider the mean-square error J_k at the output of the echo canceller at
time t = kT, defined by

    J_k = E[z_k^2]        (16.5)
For a particular coefficient vector c_k, substitution of (16.4) into (16.5) yields (see (2.17))

    J_k = E[x_k^2] - 2 \, p^T c_k + c_k^T R \, c_k        (16.6)

where p = E[x_k a_k] and R = E[a_k a_k^T]. With the assumption of i.i.d. transmitted symbols,
the correlation matrix R is diagonal. The elements on the diagonal are equal to the variance
of the transmitted symbols, \sigma_a^2 = (M^2 - 1)/3. From (2.40) the minimum mean-square error
is given by

    J_{min} = E[x_k^2] - p^T R^{-1} p        (16.7)
Adaptive canceller
By the LMS algorithm, the coefficients of the echo canceller converge in the mean to c_opt.
The LMS algorithm (see Section 3.1.2) for an N-tap adaptive linear transversal filter is
formulated as follows:

    c_{k+1} = c_k + \mu \, z_k \, a_k        (16.8)

Substituting c_k = c_{opt} + \Delta c_k into (16.6) yields

    J_k = J_{min} + \Delta c_k^T R \, \Delta c_k        (16.9)

where the term \Delta c_k^T R \, \Delta c_k represents an excess mean-square error due to the misadjustment
of the filter settings. Under the assumption that the vectors c_k and a_k are statistically
independent, the dynamics of the mean-square error are given by (see (3.272))

    J_k = \sigma_0^2 \, [1 - \mu \sigma_a^2 (2 - \mu N \sigma_a^2)]^k + \frac{2 J_{min}}{2 - \mu N \sigma_a^2}        (16.10)
where \sigma_0^2 is determined by the initial conditions. The mean-square error converges to a
finite steady-state value J_∞ if the stability condition 0 < μ < 2/(N σ_a^2) is satisfied. The
optimum adaptation gain that yields the fastest convergence at the beginning of the adaptation
process is μ_opt = 1/(N σ_a^2). The corresponding time constant and asymptotic mean-square
error are τ_opt = N and J_∞ = 2 J_min, respectively.
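The convergence behavior described above is easy to reproduce numerically. The sketch below uses binary symbols (σ_a² = 1), a hypothetical 8-tap echo response, and no remote signal or noise, so J_min = 0; it applies the LMS update (16.8) with the optimum gain μ_opt = 1/(N σ_a²).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8
h_E = rng.normal(size=N) * 0.5           # unknown echo impulse response (illustrative)
c = np.zeros(N)                           # echo canceller coefficients
mu = 1.0 / N                              # mu_opt = 1/(N * sigma_a^2), sigma_a^2 = 1

a = rng.choice([-1.0, 1.0], size=5000)    # i.i.d. binary transmitted symbols
errors = []
for k in range(N, len(a)):
    ak = a[k - N + 1:k + 1][::-1]         # delay-line vector a_k
    x = h_E @ ak                          # received echo sample
    z = x - c @ ak                        # cancellation error (16.4)
    c = c + mu * z * ak                   # LMS update (16.8)
    errors.append(z * z)

final_mse = np.mean(errors[-500:])        # residual echo power at the end
```

In this noiseless setting the coefficients converge to the echo response and the residual mean-square error decays to essentially zero, with a time constant on the order of N iterations.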
We note that a fixed adaptation gain equal to μ_opt could not be adopted in practice,
as after echo cancellation the signal from the remote transmitter would be embedded in a
residual echo having approximately the same power. If the time constant of the convergence
mode is not a critical system parameter, an adaptation gain smaller than ¼opt will be
adopted to achieve an asymptotic mean-square error close to Jmin . On the other hand, if
fast convergence is required, a variable adaptation gain will be chosen.
Several techniques have been proposed to increase the speed of convergence of the LMS
algorithm. In particular, for echo cancellation in data transmission, the speed of adaptation
is reduced by the presence of the signal from the remote transmitter in the cancellation error.
To mitigate this problem, the data signal can be adaptively removed from the cancellation
error by a decision-directed algorithm [3].
Modified versions of the LMS algorithm have also been proposed to reduce system
complexity. For example, the sign algorithm uses only the sign of the error signal
to compute an approximation of the gradient [4]. An alternative means to reduce
the implementation complexity of an adaptive echo canceller consists in the choice of a
filter structure with a lower computational complexity than the transversal filter.
    a_k = \sum_{w=0}^{W-1} (2 a_k^{(w)} - 1) \, 2^w = \sum_{w=0}^{W-1} b_k^{(w)} \, 2^w        (16.11)
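The mapping (16.11) sends the W bits of a symbol to an odd-integer level; with W = 2 it produces the quaternary alphabet. A small self-contained check (the helper name is illustrative):

```python
from itertools import product

def symbol_from_bits(bits):
    """Map W bits a^(w) in {0, 1} to a symbol via (16.11):
    a = sum_w (2*a^(w) - 1) * 2^w, i.e. a weighted sum of b^(w) = +/-1."""
    return sum((2 * b - 1) * 2 ** w for w, b in enumerate(bits))

# with W = 2 bits the mapping produces the quaternary alphabet {-3, -1, +1, +3}
alphabet = sorted(symbol_from_bits(bits) for bits in product([0, 1], repeat=2))
```

More generally, W bits yield the 2^W odd levels from -(2^W - 1) to 2^W - 1.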
Note that the summation within parentheses in (16.12) may assume at most 2^K distinct real
values, one for each binary sequence \{a^{(w)}_{k-\ell K-m}\}, m = 0, …, K - 1. If we precompute
these 2^K values and store them in a look-up table addressed by the binary sequence, we
can replace the real-time summation by a simple read from the table.
Equation (16.12) suggests that the filter output can be computed using a set of L 2^K
values that are stored in L tables with 2^K memory locations each. The binary vectors
a^{(w)}_{k,\ell} = [a^{(w)}_{k-\ell K}, …, a^{(w)}_{k-\ell K-K+1}], w = 0, …, W - 1, \ell = 0, …, L - 1, determine the
addresses of the memory locations where the values that are needed to compute the filter
output are stored. The filter output is obtained by W L table look-up and shift-and-add
operations.
We observe that a^{(w)}_{k,\ell} and its binary complement \bar{a}^{(w)}_{k,\ell} select two values that differ only
in their sign. This symmetry is exploited to halve the number of values to be stored.
To determine the output of a distributed-arithmetic filter with reduced memory size, we
reformulate (16.12) as

    \hat{u}_k = \sum_{\ell=0}^{L-1} \sum_{w=0}^{W-1} 2^w \, b^{(w)}_{k-\ell K} \left[ c_{\ell K, k} + \sum_{m=1}^{K-1} b^{(w)}_{k-\ell K} \, b^{(w)}_{k-\ell K-m} \, c_{\ell K+m, k} \right]        (16.13)
Then the binary symbol b^{(w)}_{k-\ell K} determines whether a selected value is to be added or
subtracted. Each table now has 2^{K-1} memory locations, and the filter output is given by

    \hat{u}_k = \sum_{\ell=0}^{L-1} \sum_{w=0}^{W-1} 2^w \, b^{(w)}_{k-\ell K} \, d_k(i^{(w)}_{k,\ell}, \ell)        (16.14)
where d_k(n, \ell), n = 0, …, 2^{K-1} - 1, are the look-up values stored in the \ell-th table,
\ell = 0, …, L - 1, given by

    d_k(n, \ell) = c_{\ell K, k} + \sum_{m=1}^{K-1} b^{(w)}_{k-\ell K} \, b^{(w)}_{k-\ell K-m} \, c_{\ell K+m, k}, \qquad n = i^{(w)}_{k,\ell}

and i^{(w)}_{k,\ell}, w = 0, …, W - 1, \ell = 0, …, L - 1, are the look-up indices computed as follows:

    i^{(w)}_{k,\ell} =
    \begin{cases}
    \displaystyle\sum_{m=1}^{K-1} a^{(w)}_{k-\ell K-m} \, 2^{m-1} & \text{if } a^{(w)}_{k-\ell K} = 1 \\[3mm]
    \displaystyle\sum_{m=1}^{K-1} \bar{a}^{(w)}_{k-\ell K-m} \, 2^{m-1} & \text{if } a^{(w)}_{k-\ell K} = 0
    \end{cases}        (16.15)
We note that, as long as (16.12) and (16.13) hold for some coefficient vector [c_{0,k}, …,
c_{N-1,k}], a distributed-arithmetic filter emulates the operation of a linear transversal filter.
For arbitrary values d_k(n, \ell), however, a non-linear filtering operation results.
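The equivalence just stated can be verified numerically: with look-up values precomputed from a coefficient vector, the table-based output coincides with the transversal-filter output c^T a. The sketch below uses full tables of 2^K entries (without the sign-symmetry reduction of (16.13)–(16.15)) and illustrative parameter values.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

K, L, W = 4, 2, 2                 # K bits per table address, L tables, W bit planes
N = K * L                         # filter length
c = rng.normal(size=N)            # transversal-filter coefficients

# precompute, for each table l, the 2^K partial sums sum_m b_m * c[l*K + m]
tables = []
for l in range(L):
    tab = {}
    for bits in product([0, 1], repeat=K):
        b = np.array([2 * v - 1 for v in bits])          # b^(w) = +/-1
        tab[bits] = float(b @ c[l * K:(l + 1) * K])
    tables.append(tab)

def da_filter(bit_planes):
    """Distributed-arithmetic output: bit_planes[w][n] is bit a^(w) of the
    symbol in delay-line position n (0 = most recent)."""
    u = 0.0
    for w in range(W):
        for l in range(L):
            addr = tuple(bit_planes[w][l * K:(l + 1) * K])
            u += 2 ** w * tables[l][addr]                 # shift-and-add
    return u

# random delay-line contents, expressed both as bit planes and as symbols
bit_planes = [[int(rng.integers(0, 2)) for _ in range(N)] for _ in range(W)]
symbols = np.array([sum((2 * bit_planes[w][n] - 1) * 2 ** w for w in range(W))
                    for n in range(N)])
```

The table-based output equals the inner product of the coefficients with the symbol vector, confirming that the structure emulates a linear transversal filter.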
The expression of the LMS algorithm to update the values of a distributed-arithmetic
echo canceller is derived as in (3.280). To simplify the notation we set

    \hat{u}_k(\ell) = \sum_{w=0}^{W-1} 2^w \, b^{(w)}_{k-\ell K} \, d_k(i^{(w)}_{k,\ell}, \ell)        (16.16)
We also define the vector of the values in the \ell-th look-up table as

    d_k(\ell) = [d_k(0, \ell), …, d_k(2^{K-1} - 1, \ell)]^T        (16.17)

where

    \nabla_{d_k(\ell)} z_k^2 = 2 z_k \, \nabla_{d_k(\ell)} z_k = -2 z_k \, \nabla_{d_k(\ell)} \hat{u}_k = -2 z_k \, \nabla_{d_k(\ell)} \hat{u}_k(\ell)        (16.20)

The last expression has been obtained using (16.17) and the fact that only \hat{u}_k(\ell) depends
on d_k(\ell). Defining the vector y_k(\ell) = [y_k(0, \ell), …, y_k(2^{K-1} - 1, \ell)]^T, with elements
given by (16.22), (16.19) becomes

    d_{k+1}(\ell) = d_k(\ell) + \mu \, z_k \, y_k(\ell)        (16.21)
For a given value of k and \ell, we assign the following values to the W addresses (16.15):

    I^{(w)} = i^{(w)}_{k,\ell}, \qquad w = 0, 1, …, W - 1

    y_k(n, \ell) = \sum_{w=0}^{W-1} 2^w \, b^{(w)}_{k-\ell K} \, \delta_{n - I^{(w)}}        (16.22)
In conclusion, in (16.21), for every instant k and for each value of the index w =
0, 1, …, W - 1, the product 2^w \, b^{(w)}_{k-\ell K} \, \mu z_k is added to the memory location indexed by
I^{(w)}. The complexity of the implementation can be reduced by updating, at every iteration
k, only the values corresponding to the addresses given by the most significant bits of the
symbols in the filter delay line. In this case (16.22) simplifies into

    y_k(n, \ell) =
    \begin{cases}
    2^{W-1} \, b^{(W-1)}_{k-\ell K} & n = I^{(W-1)} \\
    0 & n \neq I^{(W-1)}
    \end{cases}        (16.23)
The block diagram of an adaptive distributed-arithmetic echo canceller with input sym-
bols from a quaternary alphabet is shown in Figure 16.6.
The analysis of the mean-square error convergence behavior and steady-state perfor-
mance can be extended to adaptive distributed-arithmetic echo cancellers [6]. The dynamics
of the mean-square error are in this case given by
    J_k = \sigma_0^2 \left[ 1 - \frac{\mu \sigma_a^2}{2^{K-1}} \, (2 - \mu L \sigma_a^2) \right]^k + \frac{2 J_{min}}{2 - \mu L \sigma_a^2}        (16.24)
The stability condition for the echo canceller is 0 < μ < 2/(L σ_a^2). For a given adaptation
gain, echo canceller stability depends on the number of tables and on the variance of
the transmitted symbols. Therefore, the time span of the echo canceller can be increased
without affecting system stability, provided that the number L of tables is kept constant. In
that case, however, mean-square error convergence will be slower. From (16.24), we find
that the optimum adaptation gain that permits the fastest mean-square error convergence
at the beginning of the adaptation process is μ_opt = 1/(L σ_a^2). The time constant of the
convergence mode is τ_opt = L 2^{K-1}. The smallest achievable time constant is therefore
proportional to the total number of values. As mentioned above, the implementation of a
distributed-arithmetic echo canceller can be simplified by updating at each iteration only
the values that are addressed by the most significant bits of the symbols stored in the delay
line. The complexity required for adaptation can thus be reduced at the price of a slower
rate of convergence.
The output of the echo channel is represented as the sum of two contributions. The
near-end echo u_{NE}(t) arises from the impedance mismatch between the hybrid and the
transmission line, as in the case of baseband transmission. The far-end echo u_{FE}(t) repre-
sents the contribution due to echoes that are generated at intermediate points in the telephone
network. These echoes are characterized by additional impairments, such as jitter and fre-
quency shift, which are accounted for by introducing a carrier-phase rotation equal to φ(t)
in the model of the far-end echo.
At the receiver, samples of the signal at the channel output are obtained synchronously
with the transmitter timing, at the sampling rate of Q_0/T samples/s. The discrete-time
received signal is converted to a complex-valued baseband signal \{x_{k F_0+i}\}, i = 0, …,
F_0 - 1, at the rate of F_0/T samples/s, 1 < F_0 < Q_0, through filtering by the receive phase
splitter filter, decimation, and demodulation. From delayed transmit symbols, estimates
of the near- and far-end echo signals after demodulation, \{\hat{u}^{NE}_{k F_0+i}\}, i = 0, …, F_0 - 1, and
\{\hat{u}^{FE}_{k F_0+i}\}, i = 0, …, F_0 - 1, respectively, are generated using F_0 interlaced near- and far-end
echo cancellers. The cancellation error is given by

    z_\ell = x_\ell - (\hat{u}^{NE}_\ell + \hat{u}^{FE}_\ell)        (16.25)
In the considered implementation, the estimates of the echo signals after demodulation
are given by

    \hat{u}^{NE}_{k F_0+i} = \sum_{n=0}^{N_{NE}-1} c^{NE}_{n F_0+i, k} \, a_{k-n}, \qquad i = 0, …, F_0 - 1        (16.26)

    \hat{u}^{FE}_{k F_0+i} = \left( \sum_{n=0}^{N_{FE}-1} c^{FE}_{n F_0+i, k} \, a_{k-n-D_E} \right) e^{j \hat{\varphi}_{k F_0+i}}, \qquad i = 0, …, F_0 - 1        (16.27)

where [c^{NE}_{0,k}, …, c^{NE}_{F_0 N_{NE}-1, k}] and [c^{FE}_{0,k}, …, c^{FE}_{F_0 N_{FE}-1, k}] are the coefficients of the F_0
interlaced near- and far-end echo cancellers, respectively.
The far-end echo phase estimate is computed by a second-order phase-locked loop algo-
rithm (see Section 14.7), where the following gradient approach is adopted:

    \begin{cases}
    \hat{\varphi}_{\ell+1} = \hat{\varphi}_\ell - \frac{1}{2} \mu_\varphi \, \nabla_{\hat{\varphi}} |z_\ell|^2 + \Delta\varphi_\ell \pmod{2\pi} \\[1mm]
    \Delta\varphi_{\ell+1} = \Delta\varphi_\ell - \frac{1}{2} \mu_f \, \nabla_{\hat{\varphi}} |z_\ell|^2
    \end{cases}        (16.30)

where \ell = k F_0 + i, i = 0, …, F_0 - 1, \mu_\varphi and \mu_f are parameters of the loop, and

    \nabla_{\hat{\varphi}} |z_\ell|^2 = \frac{\partial |z_\ell|^2}{\partial \hat{\varphi}_\ell} = -2 \, \operatorname{Im}\{z_\ell \, (\hat{u}^{FE}_\ell)^*\}        (16.31)
We note that the algorithm (16.30) requires F0 iterations per modulation interval, i.e., we
cannot resort to interlacing to reduce the complexity of the computation of the far-end echo
phase estimate.
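The gradient in (16.31) can be checked against a finite-difference approximation. In the sketch below the numerical values are arbitrary test inputs, and the sign convention assumes the far-end echo estimate enters as û^FE ∝ e^{jφ̂}.

```python
import numpy as np

x = 1.2 - 0.7j                      # received sample (echo only, for the check)
v = 0.9 + 0.4j                      # far-end echo estimate before phase rotation
phi = 0.3                           # current phase estimate

def sq_err(p):
    """Squared cancellation error |z|^2 as a function of the phase estimate."""
    z = x - v * np.exp(1j * p)
    return abs(z) ** 2

z = x - v * np.exp(1j * phi)
u_hat = v * np.exp(1j * phi)
analytic = -2.0 * np.imag(z * np.conj(u_hat))    # gradient as in (16.31)
numeric = (sq_err(phi + 1e-6) - sq_err(phi - 1e-6)) / 2e-6  # central difference
```

The analytic and numerical gradients agree, confirming the expression used in the loop updates (16.30).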
Case N ≤ L + 1. We initially assume that the length of the echo channel impulse response
is N ≤ L + 1. Furthermore, we assume that the boundaries of the received blocks are
placed such that the last M samples of the k-th received block are expressed by the vector
(see (9.72))

    x_k = A_k^R \, h + A_k \, h_E + w_k        (16.32)

where A_k is the circulant matrix with elements generated by the local transmitter,

    A_k =
    \begin{bmatrix}
    A_k[0] & A_k[M-1] & \cdots & A_k[1] \\
    A_k[1] & A_k[0] & \cdots & A_k[2] \\
    \vdots & \vdots & & \vdots \\
    A_k[M-1] & A_k[M-2] & \cdots & A_k[0]
    \end{bmatrix}        (16.34)
    U_k = \operatorname{diag}(a_k) \, H_E        (16.35)

where H_E denotes the DFT of the vector h_E. In this case, the echo canceller provides an
echo estimate that is given by

    \hat{U}_k = \operatorname{diag}(a_k) \, C_k        (16.36)

where C_k denotes the DFT of the vector c_k of the N coefficients of the echo canceller filter
extended with M - N zeros.
In the time domain, (16.36) corresponds to the estimate

    \hat{u}_k = A_k \, c_k        (16.37)
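The correspondence between the frequency-domain and time-domain estimates rests on the fact that a circulant matrix is diagonalized by the DFT. A quick numerical check with an arbitrary block and arbitrary coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8
a_time = rng.normal(size=M)              # transmitted block (time domain, illustrative)
c = rng.normal(size=M)                   # canceller coefficients (zero-padded to M)

# circulant matrix as in (16.34): entry (i, j) is a_time[(i - j) mod M]
A = np.array([[a_time[(i - j) % M] for j in range(M)] for i in range(M)])

u_time = A @ c                                  # time-domain estimate, as in (16.37)
U_freq = np.fft.fft(a_time) * np.fft.fft(c)     # element-wise product, as in (16.36)
```

The DFT of the time-domain estimate equals the element-wise product in the DFT domain, i.e., circular convolution by the circulant matrix corresponds to per-bin multiplication.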
Case N > L C 1. In practice, however, we need to consider the case N > L C 1. The
expression of the cancellation error is then given by
k;k1 D
2 3
Ak [0] Ak [M 1] : : : Ak [M L] : : : Ak1 [L C 1]
Ak1 [M 1]
6 A k [1] Ak [0] : : : Ak [M L C 1] : : : Ak1 [L C 2] 7
Ak [M L]
6 7
6 :: 7
4 : 5
Ak [M 1] Ak [M 2] : : : Ak [M L 1] Ak [M L 2] : : : Ak [0]
(16.39)
From (16.38), the expression of the cancellation error in the time domain is then given by

    z_k = x_k - A_{k,k-1} \, c_k        (16.40)

    z_k = x_k - (A_{k,k-1} - A_k) \, c_k - A_k \, c_k        (16.42)

    Z_k = X_k - \tilde{A}_{k,k-1} \, C_k        (16.44)

where \tilde{A}_{k,k-1} = F_M \, A_{k,k-1} \, F_M^{-1}; the echo canceller adaptation by the LMS algorithm in
the frequency domain takes the form

    C_{k+1} = C_k + \mu \, \tilde{A}^H_{k,k-1} \, Z_k        (16.45)

which entails a substantially lower computational complexity than the time-domain LMS
algorithm, at the price of a slower rate of convergence.
In DMT systems it is essential that the length of the channel impulse response be much
less than the number of subchannels, so that the reduction in rate due to the cyclic extension
may be considered negligible. Therefore, time-domain equalization is adopted in practice
to shorten the length of the channel impulse response. From (16.43), however, we observe
that transceiver complexity depends on the relative lengths of the echo and of the channel
impulse responses. To reduce the length of the cyclic extension as well as the computational
complexity of the echo canceller, various methods have been proposed to shorten both the
channel and the echo impulse responses jointly [9].
16.4. Multiuser detection for VDSL 1131
In this section, we address the problem of multiuser detection for upstream VDSL trans-
mission (see Chapter 17), where FEXT signals at the input of a VDSL receiver are viewed
as interferers that share the same channel as the remote user signal [10].
We assume knowledge of the FEXT responses at the central office and consider a
decision-feedback equalizer (DFE) structure with cross-coupled linear feedforward (FF)
equalizers and feedback (FB) filters for cross-talk suppression. DFE structures with cross-
coupled filters have also been considered for interference suppression in wireless CDMA
communications [11] and fast Ethernet transmission (see Appendix 19.A). Here we de-
termine the optimum DFE coefficients in a minimum mean-square error (MMSE) sense
assuming that each user adopts OFDM modulation for upstream transmission. A system
with reduced complexity may be considered for practical applications, in which for each
user and each subchannel only the most significant interferers are suppressed.
To obtain a receiver structure for multiuser detection that exhibits moderate complexity,
we assume that each user adopts FMT modulation with M subchannels for upstream trans-
mission (see Chapter 9). Hence the subchannel signals exhibit non-zero excess bandwidth
as well as negligible spectral overlap. Assuming upstream transmission by U users, the
system illustrated in Figure 16.9 is considered. In general, the sequences of subchannel
signal samples at the multicarrier demodulator output are obtained at a sampling rate equal
to a rational multiple $F_0$ of the modulation rate $1/T$. To simplify the analysis, here an integer $F_0 \geq 2$ is assumed.
We introduce the following definitions:
1. $\{a_k^{(u)}[i]\}$, sequence of i.i.d. complex-valued symbols from a QAM constellation $\mathcal{A}^{(u)}[i]$ transmitted by user $u$ over subchannel $i$, with variance $\sigma^2_{a^{(u)}[i]}$;
2. $\{\hat{a}_k^{(u)}[i]\}$, sequence of detected symbols of user $u$ at the output of the decision element of subchannel $i$;
4. $h^{(u,v)}_{FEXT,n}[i]$, overall FEXT response of subchannel $i$, from user $v$ to user $u$;
5. $G^{(u)}[i]$, gain that determines the power of the signal of user $u$ transmitted over subchannel $i$;
6. $\{\tilde{w}_n^{(u)}[i]\}$, sequence of additive Gaussian noise samples with correlation function $r_{w^{(u)}[i]}(m)$.
1132 Chapter 16. Applications of interference cancellation
Figure 16.9. Block diagram of transmission channel and DFE structure. [From [10], © 2001 IEEE.]
At the output of subchannel $i$ of the user-$u$ demodulator, the complex baseband signal is given by

$x_n^{(u)}[i] = G^{(u)}[i] \sum_{k=-\infty}^{+\infty} h^{(u)}_{n-kF_0}[i]\, a_k^{(u)}[i] + \sum_{\substack{v=1 \\ v\neq u}}^{U} G^{(v)}[i] \sum_{k=-\infty}^{+\infty} h^{(u,v)}_{FEXT,n-kF_0}[i]\, a_k^{(v)}[i] + \tilde{w}_n^{(u)}[i]$   (16.47)
For user u, symbol detection at the output of subchannel i is achieved by a DFE structure
such that the input to the decision element is obtained by combining the output signals of
U linear filters and U feedback filters from all users, as illustrated in Figure 16.9.
In practice, to reduce system complexity, for each user only a subset of all other user
signals (interferers) is considered as an input to the DFE structure [10]. The selection of
the subset of signals is based on the power and the number of the interferers. This strategy,
however, results in a loss of performance, as some strong interferers may not be considered.
This effect is similar to the near–far problem in CDMA systems. To alleviate this problem,
it is necessary to introduce power control of the transmitted signals; a suitable method will
be described in the next section.
To determine the DFE filter coefficients, we assume $M_1$ and $M_2$ coefficients for each FF and FB filter, respectively. We define the following vectors:
1. $x_k^{(u)}[i]$, vector of the $M_1$ samples contained in the delay lines of the FF filters with input given by the demodulator output of user $u$ at subchannel $i$;
2. $c^{(u,v)}[i] = [c_0^{(u,v)}[i], \ldots, c_{M_1-1}^{(u,v)}[i]]^T$, coefficients of the FF filter from the demodulator output of user $v$ to the decision element input of user $u$ at subchannel $i$;
The input to the decision element of user $u$ at subchannel $i$ includes the cross-coupled contribution

$\sum_{\substack{v=1 \\ v\neq u}}^{U} \left\{ c^{(u,v)T}[i]\, x^{(v)}_{kF_0}[i] + b^{(u,v)T}[i]\, \hat{a}^{(v)}_k[i] \right\}$   (16.48)
Without loss of generality, we extend the technique developed in Section 8.5 for the
single-user fractionally-spaced DFE to determine the optimum coefficients of the DFE
structure for user u D 1. We introduce the following vectors and matrices:
1. $h_m^{(u)}[i] = G^{(u)}[i]\, [h^{(u)}_{mF_0+M_1-1+D^{(u)}[i]}[i], \ldots, h^{(u)}_{mF_0+D^{(u)}[i]}[i]]^T$, vector of $M_1$ samples of the impulse response of subchannel $i$ of user $u$.
3. Correlation matrices of the demodulator output vectors:

$R^{(1,1)}[i] = E[x_k^{(1)*}[i]\, x_k^{(1)T}[i]] - \sum_{m=1}^{M_2} \left( \sigma^2_{a^{(1)}[i]}\, h_m^{(1)*}[i]\, h_m^{(1)T}[i] + \sum_{v=2}^{U} \sigma^2_{a^{(v)}[i]}\, h^{(1,v)*}_{FEXT,m}[i]\, h^{(1,v)T}_{FEXT,m}[i] \right)$   (16.50)

$R^{(l,l)}[i] = E[x_k^{(l)*}[i]\, x_k^{(l)T}[i]] - \sum_{m=1}^{M_2} \left( \sigma^2_{a^{(l)}[i]}\, h_m^{(l)*}[i]\, h_m^{(l)T}[i] + \sigma^2_{a^{(1)}[i]}\, h^{(l,1)*}_{FEXT,m}[i]\, h^{(l,1)T}_{FEXT,m}[i] \right), \quad l = 2, \ldots, U$   (16.51)

$R^{(l,1)}[i] = E[x_k^{(l)*}[i]\, x_k^{(1)T}[i]] - \sum_{m=1}^{M_2} \left( \sigma^2_{a^{(1)}[i]}\, h^{(l,1)*}_{FEXT,m}[i]\, h_m^{(1)T}[i] + \sigma^2_{a^{(l)}[i]}\, h_m^{(l)*}[i]\, h^{(1,l)T}_{FEXT,m}[i] + \sum_{\substack{p=2 \\ p\neq l}}^{U} \sigma^2_{a^{(p)}[i]}\, h^{(l,p)*}_{FEXT,m}[i]\, h^{(1,p)T}_{FEXT,m}[i] \right), \quad l = 2, \ldots, U$   (16.52)

$R^{(l,j)}[i] = E[x_k^{(l)*}[i]\, x_k^{(j)T}[i]] - \sum_{m=1}^{M_2} \left( \sigma^2_{a^{(1)}[i]}\, h^{(l,1)*}_{FEXT,m}[i]\, h^{(j,1)T}_{FEXT,m}[i] + \sigma^2_{a^{(j)}[i]}\, h^{(l,j)*}_{FEXT,m}[i]\, h_m^{(j)T}[i] \right)$

where $R^{(1)}[i]$ in general is a positive semi-definite Hermitian matrix, for which we assume here the inverse exists.
5. Defining the vectors $c^{(1)}[i] = [c^{(1,1)T}[i], c^{(1,2)T}[i], \ldots, c^{(1,U)T}[i]]^T$ and $b^{(1)}[i] = [b^{(1,1)T}[i], \ldots, b^{(1,U)T}[i]]^T$, the optimum coefficients are given by

$c_{opt}^{(1)}[i] = [R^{(1)}[i]]^{-1}\, p^{(1)}[i]$   (16.56)
and the optimum FB coefficients $b_{opt}^{(1)}[i]$ follow from $c_{opt}^{(1)}[i]$ and the impulse-response vectors $h_m^{(1)}[i], h^{(2,1)}_{FEXT,m}[i], \ldots, h^{(U,1)}_{FEXT,m}[i]$.
The MMSE value at the decision point of user 1 on subchannel $i$ is thus given by

$J_{min}^{(1)}[i] = \sigma^2_{a^{(1)}[i]} - p^{(1)H}[i]\, c_{opt}^{(1)}[i]$   (16.58)
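The solve in (16.56) and the residual error (16.58) can be checked numerically. The sketch below builds the correlation matrix and cross-correlation vector empirically for a toy single-user case (a hypothetical two-tap channel, a three-tap FF filter, unit-variance symbols, and no FB section) and evaluates the MMSE via the standard Wiener solution.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200000
h = np.array([1.0, 0.5])            # hypothetical channel impulse response
a = rng.choice([-1.0, 1.0], N)      # unit-variance binary symbols
x = np.convolve(a, h)[:N] + 0.1 * rng.normal(size=N)  # received samples with AWGN

M1 = 3                              # FF filter length
# delay-line matrix: row k holds [x_k, x_{k-1}, x_{k-2}]
Xmat = np.stack([np.roll(x, d) for d in range(M1)], axis=1)[M1:]
d = a[M1:]                          # desired symbol a_k (zero decision delay)

R = Xmat.conj().T @ Xmat / len(d)   # empirical correlation matrix, cf. (16.50)
p = Xmat.conj().T @ d / len(d)      # empirical cross-correlation vector
c_opt = np.linalg.solve(R, p)       # optimum FF coefficients, cf. (16.56)
J_min = 1.0 - p.conj() @ c_opt      # MMSE with sigma_a^2 = 1, cf. (16.58)
print(float(J_min.real))
```

The same template extends to the multiuser case by stacking the delay-line vectors of all $U$ demodulator outputs into one larger vector before forming $R$ and $p$.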
The performance of an OFDM system is usually measured in terms of achievable bit rate for given channel and cross-talk characteristics (see Chapter 13). The number of bits per modulation interval that can be loaded with a bit-error probability of $10^{-7}$ on subchannel $i$ is given by (see (13.15))

$b^{(1)}[i] = \log_2 \left( 1 + \frac{\sigma^2_{a^{(1)}[i]}}{J_{min}^{(1)}[i]}\, 10^{(G_{code} - \Gamma_{gap,dB})/10} \right)$   (16.59)
where G code is the coding gain assumed to be the same for all users and all subchannels.
The achievable bit rate for user 1 is therefore given by
$R_b^{(1)} = \frac{1}{T} \sum_{i=0}^{M-1} b^{(1)}[i] \quad \text{bit/s}$   (16.60)
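The loading rule (16.59) and the rate sum (16.60) translate directly into code. In the sketch below the per-subchannel MMSE values and the 276 kHz modulation rate are hypothetical placeholders; the values $\Gamma_{gap,dB} = 15.8$ dB and $G_{code} = 5.5$ dB are taken from the numerical example later in this section.

```python
import math

def bits_per_subchannel(sigma_a2, J_min, gamma_gap_db=15.8, g_code_db=5.5):
    # b[i] = log2(1 + (sigma_a^2 / J_min) * 10**((G_code - Gamma_gap,dB)/10)), cf. (16.59)
    return math.log2(1.0 + (sigma_a2 / J_min) * 10 ** ((g_code_db - gamma_gap_db) / 10))

# achievable rate (16.60): sum over the used subchannels divided by T
T = 1 / 276e3              # modulation interval for 276 kHz subchannels (assumption)
J = [1e-4, 5e-4, 1e-3]     # hypothetical per-subchannel MMSE values
Rb = sum(bits_per_subchannel(1.0, Ji) for Ji in J) / T
print(Rb / 1e6, "Mbit/s")
```

A smaller MMSE on a subchannel directly buys more bits per modulation interval, which is why the multiuser DFE pays off in achievable rate.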
• for each individual user, the transmit signal PSD is determined by taking into account the distribution of known target rates and estimated line lengths of users in the network, and
• the total power for upstream signals is kept to a minimum to reduce interference with other services in the same cable binder.
In the preceding section, we found the expression (16.60) of the achievable upstream rate
for a user in a VDSL system with U users, assuming perfect knowledge of FEXT impulse
responses and multiuser detection. However, PBO may be applied by assuming only the
knowledge of the statistical behavior of FEXT coupling functions with no attempt to cancel
interference. In this case, the achievable bit rate of user u is given by
$R_b^{(u)} = \int_B \log_2 \left( 1 + \frac{P^{(u)}(f)\, |H^{(u)}(f)|^2}{\sum_{\substack{v=1 \\ v\neq u}}^{U} P^{(v)}(f)\, |H^{(u,v)}_{FEXT}(f)|^2 + N_0}\, 10^{(G_{code} - \Gamma_{gap,dB})/10} \right) df$   (16.61)
where $P^{(u)}(f)$ denotes the PSD of the signal transmitted by user $u$, $H^{(u)}(f)$ is the frequency response of the channel for user $u$, $H^{(u,v)}_{FEXT}(f)$ is the FEXT frequency response from user $v$ to user $u$, and $N_0$ is the PSD of additive white Gaussian noise. From Section 4.4.2, the expression of the average FEXT power coupling function is given by
$|H^{(u,v)}_{FEXT}(f)|^2 = k_t\, f^2\, \min(L_u, L_v)\, |H^{(v)}(f)|^2$   (16.62)
where L u and L v denote the lengths of the lines of user u and v, respectively, and kt is a
constant.
Assuming that the various functions are constant within each subchannel band for OFDM modulation, we approximate (16.61) as

$R_b^{(u)} = \frac{1}{T} \sum_{i=0}^{M-1} \log_2 \left( 1 + \frac{P^{(u)}(f_i)\, |H^{(u)}(f_i)|^2}{\sum_{\substack{v=1 \\ v\neq u}}^{U} P^{(v)}(f_i)\, |H^{(u,v)}_{FEXT}(f_i)|^2 + N_0}\, 10^{(G_{code} - \Gamma_{gap,dB})/10} \right)$   (16.63)
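Combining the coupling model (16.62) with one term of the sum in (16.63) gives a quick feel for how strongly FEXT limits the achievable loading. All line parameters below are hypothetical, with $k_t$ and the gap values taken from the numerical example later in this section.

```python
import math

kt = 6.65e-21

def fext_gain2(f, L_u, L_v, H2_v):
    # average FEXT power coupling |H_FEXT(f)|^2, cf. (16.62)
    return kt * f ** 2 * min(L_u, L_v) * H2_v

# hypothetical two-user example at a single subchannel frequency
f = 5e6                          # Hz
L = 500.0                        # both lines 500 m
H2 = 0.1                         # |H(f)|^2, same channel gain for both users
P = 10 ** (-60 / 10) * 1e-3      # transmit PSD: -60 dBm/Hz expressed in W/Hz
N0 = 10 ** (-140 / 10) * 1e-3    # AWGN PSD: -140 dBm/Hz in W/Hz
gap = 10 ** ((5.5 - 15.8) / 10)  # combined coding gain and SNR gap factor

sinr = P * H2 / (P * fext_gain2(f, L, L, H2) + N0)
b = math.log2(1 + sinr * gap)    # bits on this subchannel, one term of (16.63)
print(b)
```

Because the coupling grows as $f^2$, the FEXT term dominates the noise at the higher VDSL frequencies, so the loading there is interference-limited rather than noise-limited.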
and

2. $R_b^{(u)} \geq R_{b,target}^{(u)}, \quad u = 1, \ldots, U$   (16.66)

where $P_{max}$ is a constant maximum PSD value and $R_{b,target}^{(u)}$ is the target rate for user $u$.
In (16.66), $R_b^{(u)}$ is given by (16.60) or (16.63), depending on the receiver implementation. Finding the optimum upstream transmit power distribution for each user is therefore equivalent to solving a non-linear programming problem in the $UM$ parameters $G^{(u)}[i]$, $u = 1, \ldots, U$, $i = 0, \ldots, M-1$. The optimum values of these parameters that minimize
(16.64) can be found by simulated annealing [13, 14].
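A minimal simulated-annealing sketch for this kind of gain optimization is shown below for two users and a single subchannel. The rate model, penalty weight, and cooling schedule are all hypothetical stand-ins for (16.63)-(16.66), chosen only to make the constrained power minimization concrete.

```python
import math
import random

random.seed(0)

def rate(g_own, g_other):
    # toy rate model: grows with the user's own gain, shrinks with the
    # interferer's gain (FEXT); stands in for (16.63)
    return math.log2(1 + 100 * g_own / (1 + 0.1 * g_other))

target = 5.0   # hypothetical target in bits per modulation interval, cf. (16.66)

def cost(g):
    # total transmit power plus a steep penalty for missed rate targets
    pen = sum(max(0.0, target - rate(g[u], g[1 - u])) for u in (0, 1))
    return sum(g) + 100.0 * pen

g = [1.0, 1.0]
best, best_cost = list(g), cost(g)
T_temp = 1.0
for _ in range(5000):
    cand = [min(2.0, max(0.0, gi + random.gauss(0, 0.05))) for gi in g]
    d = cost(cand) - cost(g)
    if d < 0 or random.random() < math.exp(-d / T_temp):
        g = cand                       # accept improving or, early on, worsening moves
        if cost(g) < best_cost:
            best, best_cost = list(g), cost(g)
    T_temp *= 0.999                    # geometric cooling
print(best, best_cost)
```

The annealer drives both gains down until the rate constraints become active, which mirrors the PBO objective of transmitting no more power than the targets require.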
with either mode A or B have been proposed. PBO methods are also classified into methods
that allow shaping of the PSD of the transmitted upstream VDSL signal, e.g., the equalized
FEXT method, and methods that lead to an essentially flat PSD of the transmitted signal
over each individual upstream band, e.g., the average log method. Both the equalized FEXT
and the average log method, which are described below, comply with mode B.
The equalized FEXT method requires that the PSD of user $u$ be computed as [12]

$P^{(u)}(f) = \min \left[ P_{max},\; P_{max}\, \frac{L_{ref}\, |H_{ref}(f)|^2}{L_u\, |H^{(u)}(f)|^2} \right]$   (16.67)
where L ref and Href denote a reference length and a reference channel frequency response,
respectively.
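As a small sketch, (16.67) can be evaluated directly. The reference and per-user line parameters below are hypothetical, and serve only to show that a short, low-attenuation loop is backed off while a long loop transmits at the full PSD cap.

```python
def equalized_fext_psd(P_max, L_ref, H2_ref, L_u, H2_u):
    # P(f) = min(P_max, P_max * L_ref*|H_ref(f)|^2 / (L_u*|H_u(f)|^2)), cf. (16.67)
    return min(P_max, P_max * (L_ref * H2_ref) / (L_u * H2_u))

# hypothetical example at one frequency, with P_max normalized to 1
p_short = equalized_fext_psd(1.0, 1000.0, 0.01, 300.0, 0.2)    # short, strong loop
p_long = equalized_fext_psd(1.0, 1000.0, 0.01, 1200.0, 0.005)  # long, weak loop
print(p_short, p_long)
```

The back-off scales so that the FEXT injected into the binder by a near user roughly matches that of the reference line, which is the sense in which the FEXT is "equalized".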
The average log method requires that, for an upstream channel in the frequency band $(f_1, f_2)$, user $u$ adopt a constant PSD given by [15]

$P^{(u)}(f) = P^{(u)}_{(f_1,f_2)}, \qquad f \in (f_1, f_2)$   (16.68)

where $P^{(u)}_{(f_1,f_2)}$ is a constant PSD level chosen such that it satisfies the condition

$\int_{f_1}^{f_2} \log_2 \left[ P^{(u)}_{(f_1,f_2)}\, |H^{(u)}(f)|^2 \right] df = K_{(f_1,f_2)}$   (16.69)

where $K_{(f_1,f_2)}$ is a constant.
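Since (16.69) is linear in $\log_2 P$, the constant level can be solved for in closed form: $(f_2 - f_1)\log_2 P = K - \int \log_2 |H(f)|^2\, df$. A discretized sketch, with hypothetical channel samples, is:

```python
import math

def average_log_psd(K, df, H2):
    # discretized (16.69): bw*log2(P) + sum(log2|H|^2)*df = K, solved for P
    I = sum(math.log2(h2) for h2 in H2) * df
    bw = len(H2) * df
    return 2 ** ((K - I) / bw)

H2 = [1.0, 0.5, 0.25, 0.125]       # hypothetical |H(f)|^2 samples across the band
P = average_log_psd(2.0, 1.0, H2)
# plug back in to verify the average-log constraint
lhs = sum(math.log2(P * h2) for h2 in H2) * 1.0
print(P, lhs)
```

Only one scalar per upstream band needs to be agreed upon ($K$), which is what makes this method attractive for deployment compared with per-user nonlinear optimization.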
In this section, the achievable rates of VDSL upstream transmission using the optimum
algorithm (16.64) and the average log method are compared for various distances and
services. The numerical results presented in this section are derived assuming a 26-gauge
telephone twisted-pair cable (see Table 4.2). The noise models for the alien-cross-talk
disturbers at the line termination and at the network termination are taken as specified in
[16] for the fiber-to-the-exchange case. Additive white Gaussian noise with a power spectral
density of −140 dBm/Hz is assumed. We consider upstream VDSL transmission of $U = 40$ users over the frequency band given by the union of B1 = (2.9 MHz, 5.1 MHz) and B2 = (7.05 MHz, 12.0 MHz), similar to those specified in [12]. The maximum PSD value is $P_{max} = -60$ dBm/Hz. FEXT power coupling functions are determined according to (16.62), where $k_t = 6.65 \times 10^{-21}$. Upstream transmission is assumed to be based on FMT modulation with bandwidth of the individual subchannels equal to 276 kHz and excess bandwidth of 12.5%; for an efficient implementation, a frequency band of (0, 17.664 MHz) is assumed, with $M = 64$ subchannels, of which only 26 are used. For the computation of the achievable rates, for an error probability of $10^{-7}$ a signal-to-noise ratio gap to capacity equal to $\Gamma'_{gap,dB} = \Gamma_{gap,dB} + 6 = 15.8$ dB, which includes a 6 dB margin against additional noise sources that may be found in the DSL environment [17], and $G_{code} = 5.5$ dB are assumed.
For each of the methods and for given target rates we consider two scenarios: the users are i) all at the same distance $L$ from the central office, and ii) uniformly distributed at ten different nodes, having distances $jL_{max}/10$, $j = 1, \ldots, 10$, from the central office. To assess the performance of each method, the maximum line length $L_{max}$ is found such that all users can reliably achieve a given target rate $R_{b,target} = 13$ Mbit/s. The achievable rates are also computed for the case that all users are at the same distance from the central office and no PBO is applied.
Figure 16.10. Achievable rates of individual users versus cable length using the optimum upstream PBO algorithm for a target rate of 13 Mbit/s. [From [10], © 2001 IEEE.]
For the optimum algorithm, the achievable rates are computed using (16.63). Further-
more, different subchannel gains may be chosen for the two bands, but transmission gains
within each band are equal. Figure 16.10 shows the achievable rates for each group of
four users with the optimum algorithm for the given target rate. The maximum line length
L max for scenario ii) turns out to be 950 m. For application to scenario ii), the opti-
mum algorithm requires the computation of 20 parameters. Note that for all users at
the same distance from the central office, i.e., scenario i), the optimum algorithm re-
quires the computation of two gains equal for all users. For scenario i), the achievable
rate is equal to the target rate up to a certain characteristic length L max , which corre-
sponds to the length for which the target rate is achieved without applying any PBO.
Also note that $L_{max}$ for scenario ii) is larger than the characteristic length found for scenario i).
Figure 16.11 illustrates the achievable rates with the average log algorithm (16.69). Joint optimization of the two parameters $K_{B1}$ and $K_{B2}$ for maximum reach under scenario ii) yields $K_{B1} = 0.02$ mW, $K_{B2} = 0.05$ mW, and $L_{max} = 780$ m. By comparison with
Figure 16.10, we note that for the VDSL transmission spectrum plan considered, optimum
upstream PBO leads to an increase in the maximum reach of up to 20%. This increase
depends on the distribution of target rates and line lengths of the users in the network.
At this point, further observations can be made on the application of PBO.
• Equal upstream services have been assumed for all users. The optimum algorithm described is even better suited for mixed-service scenarios.
Figure 16.11. Achievable rates of individual users versus cable length using the average log upstream PBO method for a target rate of 13 Mbit/s. [From [10], © 2001 IEEE.]
• The application of PBO requires the transmit PSDs of the individual user signals to be recomputed at the central office whenever one or more users join the network or drop out of the network.
Figure 16.12. Achievable rates of individual users versus cable length with all interferers suppressed. [From [10], © 2001 IEEE.]
Figure 16.13. Achievable rates of individual users versus cable length with ten interferers suppressed and no PBO applied. [From [10], © 2001 IEEE.]
Figure 16.14. Achievable rates of individual users versus cable length with ten interferers suppressed and optimum PBO applied for a target rate of 75 Mbit/s. [From [10], © 2001 IEEE.]
Bibliography
[2] D. G. Messerschmitt, “Design issues for the ISDN U-Interface transceiver”, IEEE
Journal on Selected Areas in Communications, vol. 4, pp. 1281–1293, Nov. 1986.
Chapter 17
Wired and wireless network technologies
Wired and wireless network technologies allow users to obtain services that are offered by various providers, for example, telephony, Internet access at rates of several megabits per second, on-demand television programs, and telecommuting. Links make use of channels
that allow full duplex transmission and exhibit sufficient capacity in the two directions of
transmission. Wired network technologies make use of the existing cable infrastructure,
consisting of 1) UTP cables originally laid in the customer service area (local loops) for
access to the public switched telephone network (PSTN) over a telephone channel with band
limited to about 4 kHz; 2) UTP cables installed in buildings or small geographic areas for
the links between stations of a local-area network (LAN); and 3) coaxial cables currently
used to distribute analog TV signals via cable TV networks.
Wireless network technologies have gained widespread popularity among users; for
example, technologies that allow the mobility of users are digital-enhanced cordless telecom-
munications (DECT) (see Chapter 18), personal access communications systems (PACS),
and universal mobile telecommunications systems (UMTS). Wireless LANs have been de-
veloped to extend or replace wired LANs in environments where portable network stations
are needed, allowing users to freely move around. The multichannel multipoint distribu-
tion service (MMDS) and the local multipoint distribution service (LMDS) are proposed as
“wireless cable” networks that can offer high transmission capacity in geographical areas
where the terrain does not present obstacles to the propagation of microwave radio signals.
Hybrid structures are also considered, where the downstream transmission from a central of-
fice to the user and the upstream transmission in the opposite direction are performed using
different transmission media, for example, services provided by direct broadcast satellite
(DBS) in combination with upstream transmission over the PSTN.
Figure 17.1. Illustration of a link between voiceband modems over the PSTN.
PSTN. A modem is defined as full duplex if it can transmit and receive simultaneously over the same telephone channel, and as half duplex otherwise (see Section 6.13.1 on page 522).
A link between voiceband modems over the PSTN is illustrated in Figure 17.1.
One of the technologies developed for data transmission over the PSTN is described
by the ITU-T standard V.34. The V.34 modem uses a QAM scheme with 4-dimensional
TCM (see Chapter 12) and flexible precoding for transmission over channels with ISI (see
Chapter 13). We recall that flexible precoding permits achieving coding and shaping gains
with minimum transmit power penalty for arbitrary constellations, provided that the transfer
function of the overall discrete time system does not have zeros on the unit circle. Among
the innovations introduced by the V.34 modem, we recall the initial probing of the channel,
during which the passband is determined; based on the probing results, the symbol rate,
which is in the range from 2400 to 3429 Baud, and the carrier frequency are defined. The
information bit rate for the standard version V.34bis is in the range from 2.4 to 33.6 kbit/s;
the selected value depends on the symbol rate and on the channel signal-to-noise ratio.
The assumption that the link between two modems in the PSTN is completely "analog" does not lead to an accurate model for the majority of telephone channels available today; in fact, the network itself is essentially digital and transports signals that represent speech or data with a bit rate of 64 kbit/s. If modems used by service providers are linked to the PSTN by a digital channel that allows data transmission at a bit rate of 64 kbit/s, then only one "analog" local loop is found between the user and the rest of the network, as illustrated in Figure 17.2. Modems designed for the channel model just described are usually called PCM modems and can transmit data at a bit rate of 40–56 kbit/s over channels with a frequency band that goes from 0 Hz to about 4 kHz (see Section 17.1.1). This technology is described by the ITU-T standard V.90.
In Table 17.1 the characteristics of some modems are briefly described. For full duplex
modems frequency division duplexing (FDD) or echo cancellation (EC) is used. The column
labelled “ITU-T std” identifies the international standards defined by the International
Telecommunications Union—Telecommunications Standardization Sector. The acronym TC
stands for trellis coding (see Chapter 12); for a description of the various modulation
techniques we refer to Chapter 6.
Figure 17.2. Illustration of a link between the PCM modem and the server modem.
(see Chapter 4), a reliable link over a local loop is preferable in many cases given the
large number of already installed cables [1, 2, 3]. Figure 17.3 illustrates a link between
two integrated services digital network (ISDN) modems for two rates: basic rate (BR) at
160 kbit/s and primary rate (PR) at 1.544 or 2.048 Mbit/s.
The various digital subscriber line (DSL) technologies (in short xDSL) allow full duplex
transmission between user and central office at bit rates that may be different in the two
directions [4, 5]. For example, the high bit rate digital subscriber line (HDSL) offers a
solution for full duplex transmission at a bit rate of 1.544 Mbit/s, also called T1 rate
1148 Chapter 17. Wired and wireless network technologies
Figure 17.3. Illustration of links between ISDN modems on the subscriber line (BR = basic rate and PR = primary rate).
(see Section 6.13), over two twisted pairs and up to distances of 4500 m; the single-line
high-speed DSL (SHDSL) provides full duplex transmission at rates up to 2.32 Mbit/s over
a single twisted pair, up to distances of 2000 m.
A third example is given by the asymmetric digital subscriber line (ADSL) technology
(see Figure 17.4); originally ADSL was proposed for the transmission of video-on-demand
signals; later it emerged as a technology capable of providing a large number of services.
For example, ADSL-3 is designed for downstream transmission of four compressed video
signals, each having a bit rate of 1.5 Mbit/s, in addition to the full duplex transmission of
a signal with a bit rate of 384 kbit/s, a control signal with a bit rate of 16 kbit/s, and an
analog telephone signal, up to distances of 3600 m.
A further example is given by the very high-speed DSL (VDSL) technology, mainly
designed for the fiber-to-the-curb (FTTC) architecture. The considered data rates are up to 26 Mbit/s downstream, i.e., from the central office or optical network unit to the remote terminal, and 4.8 Mbit/s upstream for asymmetric transmission, and up to 14 Mbit/s for
symmetric transmission, up to distances not exceeding a few hundred meters. Figure 17.5
illustrates the FTTC architecture, where the link between the user and an optical network
unit (ONU) is obtained by a UTP cable with maximum length of 300 m, and the link
between the ONU and a local central office is realized by optical fiber; in the figure, links
between the user and the ONU that are realized by coaxial cable or optical fiber are also
indicated, as well as the direct link between the user and the local central office using
optical fiber with a fiber-to-the-home (FTTH) architecture.
Different baseband and passband modulation techniques are considered for high speed
transmission over UTP cables in the customer service area. For example, the Study Group
T1E1.4 of Committee T1 chose 2B1Q quaternary PAM modulation (see Example 6.5.1 on
page 479) for HDSL, and DMT modulation (see Chapter 9) for ADSL. Among the organi-
zations that deal with the standardization of DSL technologies we also mention the Study
Group TM6 of the European Telecommunications Standard Institute (ETSI) and the Study
Group 15 of the ITU-T. Table 17.2 summarizes the characteristics of DSL technologies;
spectral allocations of the various signals are illustrated in Figure 17.6.
17.1. Wired network technologies
Figure 17.4. Illustration of links between ADSL modems over the subscriber line.
We now discuss more in detail the VDSL technology. Reliable and cost effective VDSL
transmission at a few tens of Megabit per second is made possible by the use of frequency-
division duplexing (FDD) (see Section 6.13), which avoids signal disturbance by near-end
cross-talk (NEXT), a particularly harmful form of interference at VDSL transmission fre-
quencies. Ideally, using FDD, transmissions on neighboring pairs within a cable binder
couple only through far-end cross-talk (FEXT) (see also Section 16.4), the level of which
is significantly below that of NEXT. In practice, however, other forms of signal coupling
come into play because upstream and downstream transmissions are placed spectrally as
close as possible to each other in order to avoid wasting useful spectrum. Closely packed
transmission bands exacerbate interband interference by echo and NEXT from similar sys-
tems (self-NEXT), possibly leading to severe performance degradation. Fortunately, it is
possible to design modulation schemes that make efficient use of the available spectrum and
simultaneously achieve a sufficient degree of separation between transmissions in opposite
directions by relying solely on digital signal processing techniques. This form of FDD is
sometimes referred to as digital duplexing.
The concept of “divide and conquer” has been used many times to facilitate the solution
of very complex problems; therefore it appears unavoidable that digital duplexing for VDSL
will be realized by the sophisticated version of this concept represented by multicarrier
transmission, although single-carrier methods have also been proposed. As discussed in
Figure 17.5. Illustration of the FTTC and FTTH architectures.
Section 9.5, several variants of multicarrier transmission exist. The digital duplexing method
for VDSL known as Zipper [6] is based on discrete-multitone (DMT) modulation; here we
consider filtered multitone (FMT) modulation (see Section 9.5), which involves a different
set of trade-offs for achieving digital duplexing in VDSL and offers system as well as
performance advantages over DMT [7, 8].
The key advantages of FMT modulation for VDSL can be summarized as follows.
First, there is flexibility to adapt to a variety of spectrum plans for allocating bandwidth
for upstream and downstream transmission by proper assignment of the subchannels. This
feature is also provided by DMT modulation, but not as easily by single-carrier modulation
systems. Second, FMT modulation allows a high level of subchannel spectral containment and thereby avoids disturbance by echo and self-NEXT. Furthermore, disturbance by a narrowband interferer, e.g., from AM or HAM radio sources, does not affect neighboring subchannels, as the filter side lobes are significantly attenuated. Third, FMT
modulation does not require synchronization of the transmissions at both ends of a link or
at the binder level, as is sometimes needed for DMT modulation.
As an example of system performance, we consider the bit rate achievable for different
values of the length of a twisted pair, assuming symmetrical transmission at 22.08 MBaud
and full duplex transmission by FDD based on FMT modulation. The channel model is
obtained by considering a line with attenuation equal to 11.1 dB/100 m at 11.04 MHz (see
Section 4.4.1), with 49 near-end cross-talk interference signals, 49 far-end cross-talk interference signals, and additive white Gaussian noise with a PSD of −140 dBm/Hz. The transmitted signal power is assumed equal to 10 dBm. The FMT system considered here employs the same linear-phase prototype filter for the realization of transmit and receive polyphase filter banks, designed for $M = 256$, $K = 288$, and a prototype filter length parameter equal to 10; we recall that with these values of $M$ and $K$ the excess bandwidth within each subchannel is equal to 12.5%. Per-subchannel equalization is obtained by a Tomlinson–Harashima precoder (see Section 13.3.1) with 9 coefficients at the transmitter and a fractionally spaced linear equalizer with 26 $T/2$-spaced coefficients at the receiver. With these parameter values, and using the bit loading technique of Section 13.2, the system achieves bit rates of 24.9 Mbit/s, 10.3 Mbit/s, and 6.5 Mbit/s for the three lengths of 300 m, 1000 m, and
1400 m, respectively [9]. We refer to Section 16.4 for a description of the general case
where users are connected at different distances from the central office and power control
is applied.
order to meet limits on emitted radiation, the signal band must be confined to frequencies
below 30 MHz and sophisticated signal processing techniques are required to obtain reliable
transmission.
Standards for Ethernet networks that use the carrier sense multiple access with collision
detection (CSMA/CD) protocol are specified by the IEEE 802.3 Working Group for differ-
ent transmission media and bit rates. With the CSMA/CD protocol, a station can transmit
a data packet only if no signal from other stations is being transmitted on the transmis-
sion medium. As the probability of collision between messages cannot be equal to zero
because of the signal propagation delay, a transmitting station must continuously monitor
the channel; in the case of a collision, it transmits a special signal called a jam signal to
inform the other stations of the event, and then stops transmission. Retransmission takes
place after a random delay. The 10BASE-T standard for operations at 10 Mbit/s over two
unshielded twisted pairs of category 3 or higher defines one of the most widely used im-
plementations of Ethernet networks; this standard considers conventional mono duplex (see
Section 16.1) transmission, where each pair is utilized to transmit only in one direction us-
ing simple Manchester line coding (see Appendix 7.A) to transmit data packets, as shown
in Figure 17.7. Transmitters are not active outside the packet transmission intervals, except for transmission of a signal called link beat that is occasionally sent to assure the link connection.
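The retransmission rule described above can be sketched as the truncated binary exponential backoff used with CSMA/CD: after the $n$-th collision a station waits a random number of slot times drawn from $[0, 2^{\min(n,10)} - 1]$, giving up after 16 attempts. The sketch below is a simplified model of the backoff rule, not a full MAC implementation.

```python
import random

random.seed(0)

MAX_ATTEMPTS = 16   # 802.3 stations abort the frame after 16 collisions

def backoff_slots(attempt):
    # wait a random number of slot times in [0, 2**min(attempt, 10) - 1]
    k = min(attempt, 10)
    return random.randrange(2 ** k)

delays = [backoff_slots(n) for n in range(1, MAX_ATTEMPTS)]
print(delays)
```

Doubling the contention window after each collision spreads the retransmissions of colliding stations apart, so repeated collisions between the same stations become progressively less likely.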
The request for transmission speeds higher than 10 Mbit/s motivated the IEEE 802.3
Working Group to define standards for fast Ethernet that maintain the CSMA/CD protocol
and allow transmission at 100 Mbit/s and above. For example, the 100BASE-FX standard
defines a physical layer (PHY) for Ethernet networks over optical fibers. The 100BASE-TX
standard instead considers conventional mono duplex transmission over two twisted pairs
of category 5; the bit rate of 100 Mbit/s is obtained by transmission with a modulation rate
of 125 MBaud and multilevel transmission (MLT-3) line coding combined with a channel
code with rate 4/5 and scrambling, as illustrated in Figure 17.8. We also mention the
100BASE-T4 standard, which considers transmission over four twisted pairs of category 3;
the bit rate of 100 Mbit/s is obtained in the following way by using an 8B6T code with a
Figure 17.7. Illustration of 10BASE-T signal characteristics (Manchester line coding).
Figure 17.8. Illustration of (a) 100BASE-TX signal characteristics and (b) 100BASE-TX
transceiver block diagram.
modulation rate equal to 25 MBaud: on the first two pairs data are transmitted at a bit rate
of 33.3 Mbit/s in half duplex fashion, while on the two remaining pairs data are transmitted
at 33.3 Mbit/s in mono duplex fashion.
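The MLT-3 line code used by 100BASE-TX is simple to state in code: each binary one advances the line signal one step along the cycle 0, +1, 0, −1, while a zero holds the current level, which lowers the spectral content relative to binary signaling at the same rate. A minimal encoder sketch:

```python
def mlt3_encode(bits):
    # MLT-3: a '1' moves the line signal one step along the cycle 0, +1, 0, -1;
    # a '0' leaves the level unchanged
    cycle = [0, 1, 0, -1]
    state, out = 0, []
    for b in bits:
        if b:
            state = (state + 1) % 4
        out.append(cycle[state])
    return out

print(mlt3_encode([1, 0, 0, 1, 1, 0, 1]))  # → [1, 1, 1, 0, -1, -1, 0]
```

Since at least four ones are needed to complete one full cycle of the output, the fundamental frequency of the line signal is at most one quarter of the modulation rate.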
A further version of fast Ethernet is represented by the 100BASE-T2 standard, which
allows users of the 10BASE-T technology to increase the bit rate from 10 to 100 Mbit/s
without modifying the cabling from category 3 to 5, or using four pairs for a link over
UTP-3 cables. The bit rate of 100 Mbit/s is achieved with dual duplex transmission over two
twisted pairs of category 3, where each pair is used to transmit in the two directions (see
Figure 17.9). The 100BASE-T2 standard represents the most advanced technology for high
Figure 17.9. Illustration of dual duplex transmission over two twisted pairs.
Figure 17.10. Illustration of signal characteristics for transmission over token ring networks.
speed transmission over UTP-3 cables in LANs; the transceiver design for 100BASE-T2
will be illustrated in Chapter 19.
Other important examples of LANs are the token ring and the fiber distributed data
interface (FDDI) networks; standards for token ring networks are specified by the IEEE
802.5 Working Group, and standards for FDDI networks are specified by ANSI. In these
networks the access protocol is based on the circulation of a token. A station is allowed
to transmit a data packet only after having received the token; once the transmission has
been completed, the token is passed to the next station. The IEEE 802.5 standard specifies
operations at 16 Mbit/s over two unshielded twisted pairs of category 3 or higher, with
mono duplex transmission. For the transmission of data packets Manchester line coding
is adopted; the token is indicated by coding violations, as illustrated in Figure 17.10. The
ANSI standard for FDDI networks specifies a physical layer for mono duplex transmission
at 100 Mbit/s over two unshielded twisted pairs of category 5 identical to that adopted for
the Ethernet 100BASE-TX standard.
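The way coding violations mark the token can be made concrete with a small sketch (a hypothetical encoder: the level assignments are assumptions, and IEEE 802.5 in fact specifies differential Manchester, so the mapping below only illustrates the idea):

```python
# Illustrative Manchester encoder for the token-ring example above.
# Each bit maps to two half-bit levels with a mid-bit transition; the
# non-data symbols J and K omit that transition ("coding violations"),
# so they can never be mistaken for data bits.

HALF_BITS = {
    "0": (+1, -1),   # high-to-low mid-bit transition
    "1": (-1, +1),   # low-to-high mid-bit transition
    "J": (+1, +1),   # violation: no mid-bit transition
    "K": (-1, -1),   # violation at the opposite level
}

def encode(symbols):
    """Map a symbol string such as '01110JK11' to half-bit levels."""
    out = []
    for s in symbols:
        out.extend(HALF_BITS[s])
    return out

def has_violation(levels):
    """True if some symbol lacks the mid-bit transition (a J or K)."""
    return any(levels[i] == levels[i + 1] for i in range(0, len(levels), 2))

print(has_violation(encode("01110JK11")))  # True: a token is recognized
print(has_violation(encode("01110")))      # False: plain data
```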
Table 17.3 summarizes the characteristics of existing standards for high speed transmis-
sion over UTP cables.
1156 Chapter 17. Wired and wireless network technologies
[The figure shows a hybrid fiber/coaxial network: the head-end controller (HC) is connected over a fiber trunk of up to about 70 km to fiber nodes (FN); from each fiber node a coaxial-cable feeder of up to about 10 km, with bidirectional split-band amplifiers, splitters, taps and terminations, reaches the stations over coaxial-cable drops.]
[The figure also shows the frequency allocation: upstream frequencies occupy the band 5–40 MHz; in the downstream direction, the band 54–450 MHz carries analog and digital (HDTV) channels delivered to all of the network, the band 450–650 MHz carries switched digital video services delivered individually to each user, and the band 650–750 MHz carries data and telephony.]
for transmission in the downstream direction, with a bit rate in the range from 30 to 45 Mbit/s
[13]; by these schemes spectral efficiencies of 5–8 bit/s/Hz are therefore obtained.
In the upstream direction the implementation of PHY and MAC layers is considerably
more difficult than in the downstream. In fact, we can make the following observations:
• signals are transmitted in bursts from the stations to the HC; therefore it is necessary for the HC receiver to implement fast synchronization algorithms;
• signals from individual stations must be received by the HC at well-defined instants of arrival and power levels; therefore, procedures are required for the determination of the round-trip delay between the HC and each station, as well as for the control of the power of the signal transmitted by each station, as channel attenuation in the upstream direction may present considerable variations, of the order of 60 dB;
• the upstream channel is usually disturbed by impulse noise and narrowband interference signals; moreover, the distortion level is much higher than in the downstream channel.
Interference signals in the upstream channel are caused by domestic appliances and HF
radio stations; these signals accumulate along the paths from the stations to the HC and
exhibit time-varying characteristics; they are usually called ingress noise. Because of the
high level of disturbance signals, the spectral efficiency of the upstream transmission is
limited to 2–4 bit/s/Hz.
The noise spectrum suggests that the upstream transmission be designed with the possibility of changing the frequency band of the transmitted signal (frequency agility), and
of selecting different modulation rates and spectral efficiencies. In [12], a QAM scheme for
upstream transmission that uses a 4 or 16 point constellation and a maximum modulation
rate of 2.56 MBaud is defined; the carrier frequency, modulation rate and spectral efficiency
are selected by the HC and transmitted to the stations as MAC information.
Figure 17.13. Example of a MAP message [12]. [Reproduced with permission of Cable
Television Laboratories, Inc.]
maintenance interval is equivalent to the maximum round-trip delay plus the transmis-
sion time of a RNG-REQ message. At the instant specified in the MAP message, the
station sends a first RNG-REQ message using the lowest power level of the transmit-
ted signal, and is identified by a SID equal to zero as a station that requests to join the
network.
If the station does not receive a response within a pre-established time, it means that
a collision occurred between RNG-REQ messages sent by more than one station, or
that the power level of the transmitted signal was too low; to reduce the probability of
repeated collisions, a collision resolution protocol with random back-off is used. After the back-off time interval, of random duration, has elapsed, the station waits for a new MAP message containing an IE of initial maintenance and at the specified instant retransmits a RNG-REQ message with a higher power level of the transmitted signal. These
steps are repeated until the HC detects a RNG-REQ message, from which it can deter-
mine the round-trip delay and the correction of the power level that the station must
apply for future transmissions. In particular, the compensation for the round-trip delay
is computed so that, once applied, the transmitted signals from the station arrive at the
HC at well-defined time instants. Then the HC sends to the station, in a ranging re-
sponse RNG-RSP message, the information on round-trip delay compensation and power
level correction to be used for future transmissions; this message also includes a tempo-
rary SID.
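The back-off and power ramp-up loop described above can be sketched as follows (a hypothetical simulation: the function names, power step and detection threshold are illustrative assumptions, not DOCSIS parameters):

```python
import random

# Sketch of the initial-ranging loop: after each unanswered RNG-REQ the
# station backs off for a random time and retries at a higher power.

def initial_ranging(hc_hears, p_start=0, p_max=60, step=3, max_backoff=16):
    """hc_hears(power) -> True when the HC detects the RNG-REQ."""
    power, attempts = p_start, 0
    while power <= p_max:
        attempts += 1
        if hc_hears(power):
            return power, attempts       # HC now answers with a RNG-RSP
        random.randrange(max_backoff)    # stand-in for the random back-off wait
        power += step                    # retry louder at the next opportunity
    raise RuntimeError("ranging failed: re-initialize the MAC")

# Example: an HC that only detects the station at 24 dB or more.
print(initial_ranging(lambda p: p >= 24))   # (24, 9)
```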
The station waits for a MAP message containing an IE of station maintenance, indi-
vidually addressed to it by its temporary SID, and in turn responds through a RNG-REQ
message, signing it with its temporary SID and using the specified round-trip delay compensation and power level correction; next, the HC sends another RNG-RSP message to
the station with information for a further refinement of round-trip delay compensation and
power level correction. The steps of ranging request/response are repeated until the HC
sends a ranging successful message; at this point the station can send a registration request
[Figure content: the cable modem waits for a broadcast maintenance opportunity, sends a RNG-REQ message, and waits for a RNG-RSP message; on error, the MAC is re-initialized.]
Figure 17.14. Finite state machine used by the cable modem for the registration proce-
dure [12]. [Reproduced with permission of Cable Television Laboratories, Inc.]
17.2. Wireless network technologies 1161
[Figure content: the HC waits for a detectable RNG-REQ, then for polled RNG-REQ messages, answering each with a RNG-RSP message that aborts ranging, continues it, or declares success.]
Figure 17.15. Finite state machine used by the HC for the registration procedure [12].
[Reproduced with permission of Cable Television Laboratories, Inc.]
Table 17.4 Scheme summarizing the main characteristics of high tier and low tier
technologies.
[The figure shows a fixed part connected through an RJ-11 interface to a “portable” part (cordless terminal adapter), and a network interface (NI).]
Figure 17.16. Illustration of the utilization of DECT in the wireless local loop.
technologies are summarized in Table 17.4. Appendix 17.A describes some of the widely
adopted wireless technologies listed in Table 17.4.
Low tier technologies are designed to achieve a quality of service (QoS) similar to
that offered by wired networks. DECT in Europe and PACS in the United States were
developed as technologies for the wireless local loop (WLL) to provide communication
services also to mobile users [14, 15]. DECT was originally developed for application in
wireless private branch exchanges (wireless PBXs) with low user mobility, and later ex-
tended as a technology for the WLL, as illustrated in Figure 17.16. PACS is illustrated in
Figure 17.17.
[The figure shows radio ports connected to a radio port controller and a switch, a wireless access fixed unit with an RJ-11 interface, and a network interface (NI).]
Figure 17.17. Illustration of the utilization of PACS in the wireless local loop.
2400–2483.5 MHz, and 5725–5850 MHz; these networks are also called RF WLANs to
distinguish them from the IR WLANs that use the infrared (IR) frequency band. Speci-
fications for PHY and MAC layers of WLANs are developed by various standardization
organizations, among which we cite the IEEE 802.11 Working Group and the European
Telecommunications Standard Institute (ETSI).
The region within which mobile stations have the possibility of exchanging information
with the network is divided into cells (see Figure 17.24). To reduce interference, neighboring
cells use different frequencies. Within each cell, an access station or base station allocates
frequencies and guarantees to the mobile stations the possibility of accessing fixed networks
over metallic cables or optical fibers (see Figure 17.18).
In the United States WLANs are allowed to operate in the ISM frequency bands without needing a license from the FCC, which, however, sets restrictions on the power of
the radio signal that must be less than 1 W and specifies that spread-spectrum technol-
ogy (see Chapter 10) must be used whenever the signal power is larger than 50 mW.
Most RF WLANs employ direct sequence or frequency hopping spread-spectrum systems;
WLANs that use narrowband modulation systems usually operate in the band around
5.8 GHz, with a transmitted signal power lower than 50 mW, in compliance with FCC
regulations. The coverage radius of RF WLANs is typically of the order of a few hundred
meters.
IR WLANs usually employ PAM-DSB modulation (see Appendix 7.C) with a carrier frequency of the order of the frequency of the infrared radiation, as also typically adopted in fiber optic links; this system allows a simple and low-cost implementation of the PHY layer [16]. However, it is necessary to consider that non-negligible interference
signals can be generated by infrared radiation sources, like the sun or lighting systems. Two
methods are usually considered in designing IR WLANs. In the first method the signal is
transmitted along a well-defined direction; in this case an IR system can also be used outdoors with a coverage radius of a few kilometers. In the second method, the signal is
irradiated in all directions and the coverage radius is limited to about 10 m.
Medium access control protocols
Unlike cabled LANs, WLANs operate over channels with multipath fading, and channel
characteristics typically vary over short distances. Channel monitoring to determine whether
other stations are transmitting requires a larger time interval than that required by a similar
operation in cabled LANs; this translates into an efficiency loss of the CSMA protocols,
whenever they are used without modifications. The MAC layer specified by the IEEE 802.11
Working Group is based on the CSMA protocol with collision avoidance (CSMA/CA), in
which four successive stages are foreseen for the transmission of a data packet, as illustrated
in Figure 17.19 [17].
[Figure 17.19 shows the four stages RTS, CTS, DATA and ACK.]
The CSMA/CA principle is simple. All mobile stations that have packets to transmit compete for channel access by sending request to send (RTS) messages by a CSMA protocol. If the base station recognizes an RTS message sent by a mobile station, it sends a clear to send (CTS) message to that mobile station, which then transmits the packet; if the packet is received correctly, the base station sends an acknowledgement (ACK) message to the mobile station. With CSMA/CA, collisions can occur only during the RTS stage of the protocol; we note, however, that the efficiency of this protocol is also reduced with respect to that of simple CSMA/CD because of the presence of the RTS and CTS stages.
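The four-stage exchange just described can be sketched as follows (a hypothetical sketch: the function and flag names are assumptions, only the message sequence comes from the text):

```python
# Minimal sketch of the CSMA/CA stages: RTS, CTS, DATA, ACK.

def csma_ca_transfer(channel_clear, packet_ok):
    """Return the messages exchanged for one packet.

    channel_clear: our RTS did not collide with another station's RTS;
    packet_ok:     the base station received the data correctly.
    """
    log = ["RTS"]              # stations contend by sending RTS via CSMA
    if not channel_clear:
        return log             # a collision costs only the short RTS
    log.append("CTS")          # the base station grants the channel
    log.append("DATA")         # the mobile station transmits its packet
    if packet_ok:
        log.append("ACK")      # the base station confirms reception
    return log

print(csma_ca_transfer(True, True))    # ['RTS', 'CTS', 'DATA', 'ACK']
print(csma_ca_transfer(False, True))   # ['RTS']
```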
As mobility is allowed, the number of mobile stations that are found inside a cell can
change at any instant. Therefore it is necessary that each station informs the others of its
presence as it moves around. A protocol used to solve this problem is the so-called hand-off,
or hand-over, protocol, which can be described as follows:
• a switching station, or all base stations with a coordinated operation, registers the information relative to the signal levels of all mobile stations inside each cell;
• if a mobile station M is serviced by base station B1, but the signal level of station M becomes larger if received by another base station B2, the switching station proceeds to a hard hand-off operation whose final result is that mobile station M is considered part of the cell covered by base station B2.
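The hard hand-off rule above can be sketched in a few lines (a hypothetical sketch: the hysteresis margin is an assumption added to avoid ping-ponging, it is not part of the description in the text):

```python
# Sketch of the hard hand-off decision: the switching station tracks the
# level at which each base station receives a mobile and reassigns the
# mobile to the base station that receives it best.

def hard_handoff(serving, levels, hysteresis=3.0):
    """levels: dict base_station -> received level (dB) for one mobile.

    The hysteresis margin (assumed here) prevents repeated hand-offs
    between two base stations that receive the mobile almost equally.
    """
    best = max(levels, key=levels.get)
    if best != serving and levels[best] > levels[serving] + hysteresis:
        return best     # mobile M now belongs to the cell of `best`
    return serving

print(hard_handoff("B1", {"B1": -80.0, "B2": -70.0}))  # B2
print(hard_handoff("B1", {"B1": -80.0, "B2": -79.0}))  # B1 (within hysteresis)
```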
[The figure shows a central office/head end connected by fiber to cells whose sectors are labeled A1, A2, A3 and B1, B2, B3.]
Two sets of frequencies are considered; the two sets include the entire frequency band for downstream transmission
and they are reused three times, thus tripling the system capacity with respect to the case
without “sectorization”. The required rejection level of secondary lobes can be reduced
by the combination of “sectorization” with orthogonal polarization of adjacent sectors; we
note, however, that orthogonal polarization can be effectively introduced only in the case
of signal propagation in the absence of multipath, as in point-to-point links over short
distances, using at each end of the link antennas placed at a large height with respect
17. Bibliography 1167
to the ground. For residential area installations where roof top antennas are used, usually
multipath components due to reflected signals from the ground and surrounding buildings
are present (see Section 4.6); the reflected signals may have components with orthogonal
polarity with respect to the desired signal and this phenomenon introduces interference.
In the design of transmission systems for LMDS it is necessary to consider the fact that,
even if the transmitter and receiver are in line of sight, the effect of traffic and foliage
movement determines a transmission channel with very hostile fading characteristics; for
example, it is common to encounter in these environments a Doppler spread of 2 Hz or less,
with a fade of over 40 dB. To obtain a reliable transmission for LMDS and to overcome
effects due to the multipath, in addition to sectorized directional antenna systems various
techniques may be considered, such as:
• frequency diversity,
Bibliography
[1] S. V. Ahamed, P. L. Gruber, and J.-J. Werner, “Digital subscriber line (HDSL and
ADSL) capacity of the outside loop plant”, IEEE Journal on Selected Areas in Com-
munications, vol. 13, pp. 1540–1549, Dec. 1995.
[2] W. Y. Chen and D. L. Waring, “Applicability of ADSL to support video dial tone in
the copper loop”, IEEE Communications Magazine, vol. 32, pp. 102–109, May 1994.
[3] G. T. Hawley, “Systems considerations for the use of xDSL technology for data
access”, IEEE Communications Magazine, vol. 35, pp. 56–60, Mar. 1997.
[7] G. Cherubini, E. Eleftheriou, S. Ölçer, and J. M. Cioffi, “Filter bank modulation tech-
niques for very high-speed digital subscriber lines”, IEEE Communications Magazine,
vol. 38, pp. 98–104, May 2000.
[8] G. Cherubini, E. Eleftheriou, and S. Ölçer, “Filtered multitone modulation for very-high-speed digital subscriber lines”, IEEE Journal on Selected Areas in Communications, June 2002.
[9] G. Cherubini, E. Eleftheriou, and S. Ölçer, “Filtered multitone modulation for VDSL”, in Proc. IEEE GLOBECOM ’99, Rio de Janeiro, Brazil, pp. 1139–1144, Dec. 1999.
[12] D. Fellows and D. Jones, “DOCSIS™”, IEEE Communications Magazine, vol. 39,
pp. 202–209, Mar. 2001.
[13] “Digital multi-programme systems for television sound and data services for cable
distribution”, ITU-T Recommendation J.83, ITU-T Study Group 9, Oct. 24 1995.
[16] F. Gfeller and W. Hirt, “A robust wireless infrared system with channel reciprocity”,
IEEE Communications Magazine, vol. 36, pp. 100–106, Dec. 1998.
[17] S. Singh, “Wireless LANs”, in The Mobile Communications Handbook (J. D. Gibson,
ed.), ch. 34, pp. 540–552, New York: CRC/IEEE Press, 1996.
[18] R. O. LaMaire, A. Krishna, P. Bhagwat, and J. Panian, “Wireless LANs and mo-
bile networking: standards and future directions”, IEEE Communications Magazine,
vol. 34, pp. 86–94, Aug. 1996.
[19] M. Rahnema, “Overview of the GSM system and protocol architecture”, IEEE Com-
munications Magazine, vol. 31, pp. 92–100, Apr. 1993.
[22] R. Pirhonen, T. Rautava, and J. Penttinen, “TDMA convergence for packet data ser-
vices”, IEEE Personal Communications Magazine, vol. 6, pp. 68–73, June 1999.
[23] A. Furuskar, S. Mazur, F. Muller, and H. Olofsson, “EDGE: enhanced data rates for
GSM and TDMA/136 evolution”, IEEE Personal Communications Magazine, vol. 6,
pp. 56–66, 1999.
[24] N. R. Sollenberger, N. Seshadri, and R. Cox, “The evolution of IS-136 TDMA for
third-generation wireless services”, IEEE Personal Communications Magazine, vol. 6,
pp. 8–18, 1999.
Wireless systems
Although the trend, through third generation universal mobile telecommunications service
(UMTS) systems, is to “merge” the different types of services into one large structure, a
distinction must be made between two classes of wireless systems: voice-oriented systems
and data-oriented systems. Figure 17.22 illustrates this aspect.
From this scheme we note that both classes of systems can be further subdivided into
two categories. Systems for voice transmission can be cordless, that is with low transmitted
power and local area services [18], or cellular, that is with high transmitted power and
wide area services. A similar distinction can be made for data transmission services: the
wireless local area networks (WLANs) are low power systems with short range coverage,
whereas the mobile-data networks are high power systems with long range coverage (see
Sections 17.2 and 17.2.1).
We will discuss here cellular systems, specifically the standards GSM, IS-136, JDC, and
IS-95; we will also address the standard DECT for cordless systems and HIPERLAN for
WLANs.
Modulation techniques
The current second generation wireless systems are digital and use different types of mod-
ulation according to the required performance. For a discussion of modulation techniques
for radio transmission systems, see Chapter 18.
Using a QPSK (or a BPSK) system with a square root raised cosine transmit pulse yields
a good trade-off between level of ISI and required bandwidth; in this case, however, the
modulated signals do not have a constant envelope, which may represent a problem in the
presence of a high power amplifier. An alternative is given by GMSK, whose basic scheme
is given in Figure 18.45; in this case the required bandwidth increases for higher values of
Bt T , where Bt is the bandwidth of the Gaussian filter; the choice of the parameter Bt T is
a compromise between pulse duration and system bandwidth.
Figure 17.23 illustrates estimated power spectra of baseband QPSK and GMSK signals
for two values of Bt T . The frequency is normalized by the bit rate 1/Tb of the system; in this
way it is easy to realize that the bandwidth efficiency of QPSK is larger than that of GMSK.
Figure 17.23. Comparison between the PSD of QPSK and GMSK signals.
1. Bandwidth efficiency or spectral efficiency: it is the ratio between the bit rate 1/Tb and the utilized bandwidth.
2. Power efficiency: for the same 1/Tb and receiver complexity, the lower the transmitted power for a given bit error probability, the greater the efficiency.
3. Out-of-band radiation: it is the power outside of the main lobe of the power spec-
trum.1
4. Robustness to multipath: it expresses the insensitivity of the system to multipath
channels.
5. Constant envelope: the use of power-efficient class C amplifiers, because of their
strong non-linearity, requires input signals with a constant envelope.
1 For example, MSK has low bandwidth efficiency and high out-of-band radiation; the Gaussian filter of a
GMSK system attempts to mitigate both these problems.
17.A. Standards for wireless systems 1173
Figure 17.24. Illustration of cell and frequency reuse concepts; cells with the same letter are
assigned the same carrier frequencies.
• Telephony service: digital telephone service with guarantee of service to users that move at speeds of up to 250 km/h;
• Data service: transfer of data packets with bit rates in the range from 300 to 9600 bit/s;
• ISDN service: some services, such as the identification of a user that places a call and the possibility of sending short messages (SMS), are realized by taking advantage of the integrated services digital network (ISDN), whose description can be found in [20].
A characteristic of the GSM system is the use of the subscriber identity module (SIM) card together with a four-digit number (ID); inserted in any mobile terminal, the card identifies the subscriber who wants to use the service. The protection of privacy offered to the subscribers of the system is also important.
Figure 17.25 represents the structure of a GSM system, which can be subdivided into three subsystems. The first subsystem, composed of the set of base transceiver stations (BTSs) and mobile terminals or mobile stations (MSs), is called the radio subsystem; it allows communication between the MSs and the mobile switching center (MSC), which coordinates the calls and also other system control operations. Many base station controllers (BSCs) are linked to an MSC; each BSC is linked to up to several hundred BTSs, each of which identifies a cell and directly realizes the link with the mobile terminal. The hand-off procedure between two BTSs is assisted by the mobile terminal, in the sense that it is the task of the MS to establish at any instant which BTS is sending the “strongest” signal. In the case of a hand-over between two BTSs linked to the same BSC, the entire procedure is handled by the BSC itself and not by the MSC; in this way the MSC is spared many operations.
The second subsystem is the network switching subsystem (NSS), that in addition to the
MSC includes:
• the home location register (HLR): this is a database that contains information regarding subscribers who reside in the same geographical area as the MSC;
• the visitor location register (VLR): this is a database that contains information regarding subscribers who are temporarily under the control of the MSC, but do not reside in the same geographical area;
• the authentication center (AUC): this controls codes and other information for correct communications management;
• the operation and maintenance centers (OMCs): they take care of the proper functioning of the various blocks of the structure.
Finally, the MSC is directly linked to the public networks: PSTN for telephone services,
ISDN for particular services such as SMS, and data networks for the transmission of data packets.
Radio subsystem
We now give some details with regard to the radio subsystem. The total bandwidth allocated for the system is 50 MHz; the frequencies from 890 to 915 MHz are reserved for MS-BTS communications, whereas the band 935–960 MHz is for communications in the opposite direction.2 In this way full-duplex communication by frequency division duplexing (FDD) is realized. Within the total bandwidth 248 carriers are allocated, which identify as many frequency channels, called absolute radio frequency channel numbers (ARFCNs); of these, 124 are for uplink communications and 124 for downlink communications. The separation between two adjacent
carriers is 200 kHz; the bandwidth subdivision is illustrated in Figure 17.26. Full-duplex
communication is achieved by assigning two carriers to the user, one for transmission and
one for reception, such that they are about 45 MHz apart.
Each carrier is used for the transmission of an overall bit rate Rb of 270.833 kbit/s,
corresponding to a bit period Tb = 3.692 µs. The system employs GMSK modulation with
parameter Bt T equal to 0.3; the aim is to have a power-efficient system. However, the
bandwidth efficiency is not very high; in fact we have

ν = 270.833/200 ≃ 1.354 bit/s/Hz     (17.1)
which is smaller than that of other systems.
Besides this FDM structure, there is also a TDMA structure; each transmission is divided
into eight time intervals, or time slots, that identify the TDM frame. Figure 17.27 shows
the structure of a frame as well as that of a single time slot.
2 There also exists a version of the same system that operates at around the frequency of 1.8 GHz (in the USA
the frequency is 1.9 GHz).
Figure 17.27. TDM frame structure and slot structure of the GSM system.
As a slot is composed of 156.25 bits (not an integer number because of the guard time
equal to 8.25 bits), its duration is about 576.92 µs; therefore the frame duration is about
4.615 ms. In Figure 17.27 it is important to note the training sequence of 26 bits, used by the MS or BS to analyze the channel. The flag bits signal whether the 114 information bits are for voice transmission or for control of the system. Finally, the tail bits indicate the beginning
and the end of the frame bits.
Although transmissions in the two directions occur over different carriers, each communication is assigned a pair of time slots spaced four slots apart (one for the transmit station and one for the receive station); for example, the first and fifth, or the second and sixth, etc.
Considering a sequence of 26 consecutive frames, with a duration of about 120 ms, the 13th and 26th frames are used for control; then in 120 ms a subscriber can transmit (or receive) 114 · 24 = 2736 bits, which corresponds to a bit rate of 22.8 kbit/s. Indeed, the net bit rate of the message can be 2.4, 4.8, 9.6, or 13 kbit/s; redundancy bits are introduced by the channel encoder for protection against errors, so that a bit rate of 22.8 kbit/s is obtained in any case.
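The GSM figures quoted above are mutually consistent, as a few lines of arithmetic confirm (all numbers are taken from the text, nothing is assumed):

```python
# Checking the GSM numbers: bit period, slot and frame durations,
# per-user net bit rate over a 26-frame multiframe, and the spectral
# efficiency of equation (17.1).

Rb = 270.833e3            # gross bit rate per carrier (bit/s)
Tb = 1 / Rb               # bit period, ~3.692 us
slot = 156.25 * Tb        # slot duration, ~576.9 us
frame = 8 * slot          # TDM frame of 8 slots, ~4.615 ms
multiframe = 26 * frame   # ~120 ms; frames 13 and 26 carry control

user_bits = 114 * 24      # info bits per user in a multiframe
user_rate = user_bits / multiframe   # net rate per user

print(round(Tb * 1e6, 3))            # 3.692 (us)
print(round(frame * 1e3, 3))         # 4.615 (ms)
print(user_bits)                     # 2736
print(round(user_rate / 1e3, 1))     # 22.8 (kbit/s)
print(round(Rb / 200e3, 3))          # 1.354 bit/s/Hz, as in (17.1)
```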
The original speech encoder chosen for the system was a RELP (see Chapter 5), improved
by a long-term predictor (LTP), with a bit rate of 13 kbit/s. The use of a voice activity
detector (VAD) allows an improvement in system capacity by reducing the bit rate to
a minimum value during silence intervals in the speech signal. For channel coding, a
convolutional encoder with code rate 1/2 is used. In [20] the most widely used speech and
channel encoders are described, together with the data interleavers.
In Figure 17.28 a scheme is given that summarizes the protection mechanism against
errors used by the GSM system. The speech encoder generates, in 20 ms, 260 bits; as the
bit rate per subscriber is 22.8 kbit/s, in 20 ms 456 bits must then be generated by introducing redundancy bits.3
To achieve reliable communications in the presence of multipath channels with delay
spread up to 16 µs, at the receiver equalization by a DFE and/or detection by the Viterbi
algorithm are implemented.
In [20, 21] the calling procedure followed by the GSM system and other specific details
are fully described.
GSM-EDGE
Recently [22, 23], to support services with higher bit rates, the enhanced data rates for GSM evolution (EDGE) system was introduced, which employs 8-PSK, with a pulse represented
in Figure 18.57, in place of GMSK. The bit rate is almost tripled to 69.2 kbit/s per subscriber;
the channel encoder is also modified.
3 The described slot structure and coding scheme refer to the transmission of user information, namely speech. Other types of communications, such as those for the control and management of the system, use different coding schemes and different time slot structures (see [21]).
The structure of the system (see Figure 17.25) and types of services provided are similar
to those of GSM; even IS-136 uses a combination of TDMA and FDMA, although with
different specifications.
A first substantial difference between the two systems is given by the modulation adopted:
IS-136 uses π/4-DQPSK with a roll-off factor of 0.35. A considerable improvement in
the bandwidth efficiency is thus obtained; it is given by

ν = 48.6/30 ≃ 1.62 bit/s/Hz     (17.2)
where the numerator is given by the bit rate per carrier and the denominator by the spacing
between carriers. The price for a better bandwidth efficiency is a decrease of the power
efficiency with respect to the GSM.
We examine some parameters used by the system. IS-136 also has a bandwidth of 50 MHz; the band 824–849 MHz (or 1850–1865 MHz) is used for the MS-BS transmissions,
whereas the band 869–894 MHz (or 1930–1945 MHz) is reserved for communications in the
reverse direction; again, the FDD technique is used to achieve full-duplex communication.
Moreover, as in GSM, a combination of FDMA and TDMA is used to realize multiple
access; the overall bandwidth is divided into sub-bands of 30 kHz, each carrying an overall
bit rate of 48.6 kbit/s, which corresponds to a bit period of Tb ≃ 20.576 µs.
The structure of the TDM frame is illustrated in Figure 17.29, together with that of a
single slot, which can be of two types, depending on whether it is used for communication
from the mobile station to the base or vice versa.
The frame is divided into 6 time slots, and each slot is composed of 324 bits, for a duration of about 6.667 ms, which means that the frame duration is about 40 ms. As for GSM, for full-duplex communication two carriers, separated by about 45 MHz, are assigned: one for the MS-BS communication and the other for the reverse direction. This time, however, 2 time slots are assigned to each user, so that at most 3 users4 can use the same frame.
We see from Figure 17.29 that, of the overall 324 bits per slot, only 260 bits are effective information, and the remaining bits serve for control and signaling of the system. Then in 40 ms there are only 260 · 2 = 520 information bits (transmitted or received), which corresponds to a bit rate of 13 kbit/s. The voice encoder is a vector-sum excited linear predictive (VSELP) encoder at 7.95 kbit/s, which generates only 159 bits in 20 ms (half a frame period). However, we have seen that in 20 ms there are 520/2 = 260 transmitted bits; hence the difference, that is 101 bits, consists of redundancy bits added by the channel encoder.
Figure 17.30 illustrates channel coding for voice messages; for control messages the
organization is different. The mechanism is similar to that of GSM; the 159 bits, generated
by the voice encoder in 20 ms, are divided into two classes: the first comprises the 77
most significant bits (divided into a group of 65 bits and one of 12 bits), the second, the
remaining 82 bits. After the procedure illustrated in the figure, the bits become 260, for a
bit rate of 13 kbit/s.
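As for GSM, the IS-136 figures quoted above can be checked with a few lines of arithmetic (all numbers are taken from the text, nothing is assumed):

```python
# Checking the IS-136 numbers: bit period, slot and frame durations,
# per-user net bit rate, channel-coding redundancy, and the spectral
# efficiency of equation (17.2).

Rb = 48.6e3               # gross bit rate per 30 kHz carrier (bit/s)
Tb = 1 / Rb               # bit period, ~20.576 us
slot = 324 * Tb           # slot duration, ~6.667 ms
frame = 6 * slot          # TDM frame of 6 slots, ~40 ms

info_bits = 260 * 2       # two slots per user per frame
user_rate = info_bits / frame        # 13 kbit/s
redundancy = 260 - 159    # coding bits added to the VSELP output in 20 ms

print(round(Tb * 1e6, 3))            # 20.576 (us)
print(round(slot * 1e3, 3))          # 6.667 (ms)
print(round(user_rate / 1e3))        # 13 (kbit/s)
print(redundancy)                    # 101
print(round(Rb / 30e3, 2))           # 1.62 bit/s/Hz, as in (17.2)
```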
The type of demodulator/equalizer is not specified. For channels with an rms delay spread greater than 4.12 µs, an RLS adaptive DFE was proposed [20].
For information on other characteristics of the IS-136 system and on its evolution, as
well as on convergence with GSM-EDGE, we recommend references [24, 25].
4 In the GSM, 8 users share the same frame; in fact, to each user is assigned only one slot.
Figure 17.29. TDM frame structure and slot structure for the IS-136 system.
Figure 17.30. Channel coding for voice messages of the IS-136 system.
5 Within the total bandwidth, each of the two systems should use only a portion of the spectrum so that they
do not interfere with each other.
6 In fact, the effective rate is from 1 to 8 kbit/s and the higher rates are due to the fact that additional redundancy
bits have already been included.
transmitting, data to all mobile terminals within the same cell (identified by a carrier) are
sent using a different spreading sequence for each user. In the MS-BS direction, transmission occurs asynchronously, with the requirement that the power levels of the received signals be as uniform as possible.
Whatever the bit rate used by the voice encoder, the channel encoder brings the bit rate
to 19.2 kbit/s. The spreading procedure used for the BS-MS transmission is illustrated in
Figure 17.31.
Note that, after channel coding, interleaving is performed. Then the bit rate is increased
to 1.2288 Mbit/s using one of 64 Walsh codes; each user in a cell is assigned a different
code, in order to maintain separation among the signals of different users in the same cell and
to eliminate interference, at least in the absence of multipath. Moreover, to reduce the level of
interference between two users in different cells to which the same Walsh code is assigned,
scrambling is used; without going into details (see Chapter 10), the function of
scrambling is to achieve quasi-orthogonality between the signals of the two users, also at
different lags, so that the interference is kept at a low level [20]. Before the two modulation
filters, a short pilot PN sequence is also inserted, which is used at the receiver for various
purposes, e.g., channel identification.
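The code-division separation described above can be illustrated with a small sketch. This is not the IS-95 signal chain, only a minimal illustration of Walsh-code orthogonality and despreading: the user indices and symbols are hypothetical, and channel coding, scrambling, interleaving, and the pilot are omitted.

```python
def walsh_codes(n):
    """Sylvester/Hadamard construction: chip j of code i is (-1)^popcount(i AND j)."""
    return [[(-1) ** bin(i & j).count("1") for j in range(n)] for i in range(n)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

W = walsh_codes(64)   # the 64 spreading codes of the IS-95 forward link

# distinct codes are orthogonal at zero lag; each code has energy 64
assert all(dot(W[i], W[j]) == (64 if i == j else 0)
           for i in range(64) for j in range(64))

# hypothetical cell with two active users: each +/-1 symbol multiplies the
# user's own code, and the chip-wise sum is sent on the common carrier
sym = {5: +1, 12: -1}
tx = [sum(s * W[i][j] for i, s in sym.items()) for j in range(64)]

# despreading user 12: correlating with W[12] recovers the symbol exactly,
# with no interference from user 5 (synchronous chips, no multipath)
assert dot(tx, W[12]) // 64 == -1
```

As the text notes, this exact orthogonality holds only at zero lag; under multipath, delayed replicas are no longer orthogonal, which is why scrambling is added.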
The system for the MS-BS communication has a different structure: channel coding is
performed by a convolutional encoder with code rate 1/3, that yields a bit rate of 28.8 kbit/s.
Spreading is still done by selecting one of 64 Walsh codes, but in a different way as
compared to the BS-MS transmission [20].
QPSK is adopted; in particular, the modulation used by the mobile station is OQPSK.
For detection, a RAKE receiver is used.
We conclude this section by mentioning the third generation (3G) mobile radio sys-
tem denoted universal mobile telecommunications system (UMTS), in the process of being
standardized. 3G systems are intended for the transmission at rates from 16 kbit/s up to
Figure 17.31. Spreading procedure used by the IS-95 system for transmission from the base
station to the mobile station.
2 Mbit/s. Various approaches have been identified for the definition of the link layer; the
most important is the wideband CDMA (WCDMA) system [27].
7 For a discussion of the differences between a cellular system and a cordless system, see Section 17.A.1.
8 For DECT, the acronyms portable handset (PH) in place of MS and radio fixed part (RFP) in place of BS,
are often used.
Figure 17.32. FDMA structure of the DECT system: ten carriers f1, ..., f10, with carrier spacing 1.728 MHz.
3. MS sends a message, called access request, over the least interfered channel.
4. BS may or may not send an answer: access granted.
5. If the MS receives this message, in turn it transmits the access confirm message and
the communication starts.
6. If the MS does not receive the access granted signal on the selected channel, it
abandons this channel and selects the second least interfered channel, repeating the
procedure; after failing on 5 channels, the MS selects another BS and repeats all
operations.
The total band allocated to the system goes from 1880 to 1900 MHz and is subdivided
into ten sub-bands, each with a width of 1.728 MHz; this is the FDMA structure of DECT
represented in Figure 17.32. Each channel has an overall bit rate of 1.152 Mbit/s, that
corresponds to a bit period of about 868 ns.
Similar to the other systems, TDMA is used. The TDM frame, given in Figure 17.33,
is composed of 24 slots: the first 12 are used for the communication BS-MS, the other
Figure 17.33. TDM frame structure and slot structure for the DECT system.
Table 17.7 Summary of characteristics of the standards GSM, IS-136, JDC, IS-95,
and DECT.
System                     GSM         IS-136      JDC         IS-95       DECT
Multiple access            TDMA/FDMA   TDMA/FDMA   TDMA/FDMA   FDMA/CDMA   TDMA/FDMA
Band (MHz)
  Forward (BS→MS)          935–960     869–894     940–960     935–960     1880–1900
                           1805–1880   1930–1945   1477–1501
  Reverse (MS→BS)          890–915     824–849     810–830     824–849     1880–1900
                           1710–1785   1850–1865   1429–1453
Duplexing                  FDD         FDD         FDD         FDD         TDD
Spacing between carriers   200 kHz     30 kHz      25 kHz      1250 kHz    1728 kHz
Modulation                 GMSK 0.3    π/4-DQPSK   π/4-DQPSK   QPSK        GMSK 0.5
Bit rate per carrier
  (kbit/s)                 270.833     48.6        42          1228.8      1152
Speech encoder             RPE-LTP     VSELP       VSELP       QCELP       ADPCM
  bit rate (kbit/s)        13          7.95        6.7         1.2–9.6     32
Convolutional encoder*
  code rate                1/2         1/2                     1/2 (F)     none
                                                               1/3 (R)
Frame duration (ms)        4.615       40          20          20          10

* All these standards use a CRC code, possibly together with a convolutional code.
The maximum bit rate is 723.2 kbit/s in one direction and 57.6 kbit/s in the reverse
direction.
HIPERLAN type 1. The allocated band for the system is from 5.15 to 5.30 GHz and from
17.1 to 17.3 GHz. The first bandwidth of 150 MHz is still the most used and is further
divided into 5 sub-bands; the first carrier is at 5.176468 GHz, with a channel bandwidth of
23.5294 MHz. Figure 17.35 shows the band allocation.
Figure 17.35. Band allocation for HIPERLAN type 1: within the 150 MHz bandwidth, five channels of width 23.5294 MHz each, with the first carrier at 5176.47 MHz.
The maximum overall (raw) bit rate is 23.5294 Mbit/s, with a corresponding bit interval
of about 42.5 ns. The system is also able to work at lower bit rates (the so-called low rate
of 1.4706 Mbit/s) in order to reduce power consumption. The transmission takes place by
data packets: m blocks of 496 bits, where m can vary from 1 to 47, are transmitted.
Before the transmission of these blocks, there is the transmission of a training sequence
of 450 bits, which is needed for channel identification and synchronization. Figure 17.36
shows the block diagram of the transmitter.
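As a quick arithmetic check of the figures above, the sketch below computes the bit interval and the on-air length of a packet, under the assumption that a packet consists of the 450-bit training sequence followed by the m encoded blocks of 496 bits; the helper name packet_bits is ours.

```python
# raw bit rate and packet timing of HIPERLAN type 1, from the figures above
rate = 23.5294e6                    # bit/s
bit_interval = 1 / rate             # about 42.5 ns
assert abs(bit_interval - 42.5e-9) < 0.1e-9

def packet_bits(m):
    """Bits on air for a packet of m data blocks (1 <= m <= 47):
    the 450-bit training sequence followed by m encoded blocks of 496 bits."""
    assert 1 <= m <= 47
    return 450 + m * 496

# the longest packet (m = 47) lasts about one millisecond
t_max_us = packet_bits(47) * bit_interval * 1e6
assert 1000 < t_max_us < 1020
```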
Figure 17.36. Block diagram of the transmitter for the HIPERLAN type 1 system.
The source generates a sequence of m blocks, each of 416 bits; at the output of the
channel encoder the block length is 496 bits. After interleaving, differential precoding is
performed and subsequently a training sequence is inserted; the generated symbols are then
input to the modulator. For HIPERLAN type 1, GMSK with parameter Bt T equal to 0.3
is adopted.9
At the receiver, adaptive equalization is performed before decoding [30].
Chapter 18
Modulation techniques
for wireless systems
In this chapter, after an overview of front-end architectures for mobile radio receivers, we
discuss modulation and demodulation schemes that are well suited for application to mobile
radio systems because of their simplicity and robustness against disturbances introduced
by the transmission channel. Appendix 17.A describes some of the standards where these
schemes are adopted.
Alternative architectures
In this section three receiver architectures are considered that attempt to integrate the largest
number of receiver elements. To reach this objective, the A/D conversion of the scheme
of Figure 18.1 must be shifted from baseband to IF or RF. In this case implementation of
the channel selection by an analog filter bank is not efficient; indeed, it is more convenient
to use a wideband analog filter followed by a digital filter bank. Reference is also made
to multi-standard software-defined radio (SDR) receivers, i.e. to the possibility of receiving
signals with different bandwidths and carriers, defined according to different
standard systems.
We can identify two approaches to the A/D conversion.
1. Full bandwidth digital conversion: by using a very high sampling frequency, the
whole bandwidth of the SDR system is available in the digital domain, i.e. all desired
channels are considered. Considering that this bandwidth can easily reach 100 MHz,
and taking into account the characteristics of the interference signals, the dynamic range
of the ADC should exceed 100 dB. This solution, even though it is the most elegant, cannot
easily be implemented, as it entails high complexity and high power consumption.
2. Partial bandwidth digital conversion: this approach uses a sampling frequency that
is determined by the radio channel that presents the most extended bandwidth within
the different systems we want to implement.
The second approach will be considered, as it leads to an SDR system architecture that can
be implemented with moderate complexity.
towards the mixer input as well as towards the antenna, with consequent radiation. The
interference signal generated by the oscillator could be reflected on surrounding objects and
be “re-received”: therefore this spurious signal would produce a time variant DC offset [1]
at the mixer output.
To understand the origin and consequences of this offset, we can make the following
two observations.
1. The isolation between the oscillator input and the inputs of the mixer and linear
amplifier is not infinite; in fact an LO leakage is determined by both capacitive
coupling and device substrate. The spurious signal appears at the linear amplifier and
mixer inputs and is then mixed with the signal generated by the oscillator, creating
a DC component; this phenomenon is called self-mixing. A similar effect is present
when there is a strong signal coming from the linear amplifier or from the mixer that
couples with the LO input: this signal would then be multiplied by itself.
2. In Figure 18.2, to amplify the input signal, which is of the order of microvolts, to a
level such that it can be digitized by a low-cost, low-power ADC, the total gain
from the antenna to the LPF output is about 100 dB. Of this gain, 30 dB are usually
provided by the linear amplifier/mixer combination. With these data we can make
a first computation of the offset due to self-mixing. We assume that the oscillator
generates a signal with a peak-to-peak value of 0.63 V and undergoes an attenuation
of 60 dB when it couples with the LNA input. If the gain of the linear amplifier/mixer
combination is 30 dB, then the offset produced at the mixer output is of the order of
10 mV; if directly amplified with a gain of 70 dB, the voltage offset would saturate
the circuits that follow.
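The offset computation in observation 2 can be reproduced numerically; the values (0.63 V peak-to-peak, 60 dB coupling attenuation, 30 dB and 70 dB gains) are those quoted in the text, treated here as voltage-gain ratios.

```python
import math

def db_to_gain(db):
    """Voltage ratio corresponding to a gain expressed in dB."""
    return 10 ** (db / 20)

v_lo = 0.63                           # oscillator peak-to-peak amplitude [V]
v_leak = v_lo * db_to_gain(-60)       # after 60 dB coupling attenuation: 0.63 mV
offset = v_leak * db_to_gain(30)      # after the 30 dB amplifier/mixer gain

# "of the order of 10 mV" (about 20 mV with these numbers)
assert 0.01 < offset < 0.03

# amplified by the remaining 70 dB, the offset alone would reach tens of
# volts, saturating the stages that follow
assert offset * db_to_gain(70) > 10
```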
The problem is even worse if self-mixing is time variant. This event, as previously men-
tioned, occurs if the oscillator signal leaks to the antenna, thus being irradiated and reflected
back to the receiver from moving objects.
Finally, the direct conversion receiver needs a tuner for the channel frequency selection
that works at high frequency and with low phase noise; this is hardly obtainable with an
integrated VCO with low Q.
conversion, it utilizes a single mixer stage; the principal difference is that the frequency
shift is not made to DC but to a small intermediate frequency. The main advantage of
the low-IF system is that the desired channel has no DC components; therefore, the usual
problems that emerge from DC offset present in the direct conversion are avoided.
s(t) = \mathrm{Re}\left[\sum_{k=-\infty}^{+\infty} e^{j\theta_k}\, h_{Tx}(t - kT)\, e^{j 2\pi f_0 t}\right] \qquad (18.1)

where \theta_k is the phase associated with the transmitted symbol at instant kT, given by the
recursive equation (6.157).
18.2. Three non-coherent receivers for phase modulation systems 1193
Figure 18.4. Non-coherent baseband differential receiver. Thresholds are set at (2n - 1)\pi/M,
n = 1, 2, \ldots, M.
At the receiver the signal r is a version of s, filtered by the transmission channel and
corrupted by additive noise. We denote by g_A the cascade of passband filters used to amplify
the desired signal and partially remove the noise. As shown in Figure 18.4, let x be the passband
received signal, centered around the frequency f_0, equal to the carrier of the transmitted
signal:

x(t) = \mathrm{Re}\big[x^{(bb)}(t)\, e^{j 2\pi f_0 t}\big] \qquad (18.2)

where x^{(bb)} is the complex envelope of x with respect to f_0. Using the polar notation
x^{(bb)}(t) = M_x(t)\, e^{j \Delta\varphi_x(t)}, (18.2) can be written as

x(t) = M_x(t) \cos(2\pi f_0 t + \Delta\varphi_x(t)) \qquad (18.3)

where \Delta\varphi_x(t) is the instantaneous phase deviation of x with respect to the carrier phase
(see Chapter 1, equation (1.207)).
In the ideal case of absence of distortion and noise, sampling at suitable instants yields
\Delta\varphi_x(t_0 + kT) = \theta_k. Then, for the recovery of the phase \theta_k, we can use the receiver scheme
of Figure 18.4, which, based on the signal x, determines the baseband component y as

y(t) = y_I(t) + j\, y_Q(t) = \tfrac{1}{2}\, \big(x^{(bb)} * g_{Rc}\big)(t) \qquad (18.4)

The phase variation of the sampled signal y_k = y(t_0 + kT) between two consecutive
symbol instants is obtained by means of the signal

z_k = y_k\, y_{k-1}^* \qquad (18.5)

Still in the ideal case, and assuming that g_{Rc} does not distort the phase of x^{(bb)}, z_k turns
out to be proportional to e^{j(\theta_k - \theta_{k-1})}. The simplest data detector is the threshold detector based on
the value of v_k = \arg z_k.
Note that a possible phase offset \Delta\varphi_0 and a frequency offset \Delta f_0, introduced by the
receive mixer, yield a signal y given by

y(t) = \tfrac{1}{2}\, \big[x^{(bb)}(t)\, e^{j(2\pi \Delta f_0 t + \Delta\varphi_0)}\big] * g_{Rc}(t) \qquad (18.7)

Assuming that g_{Rc} does not distort the phase of x^{(bb)}, the signal v_k becomes

v_k = \theta_k - \theta_{k-1} + 2\pi \Delta f_0 T \qquad (18.8)

which shows that the phase offset \Delta\varphi_0 does not influence v_k, while a frequency offset must
be compensated at the data detector by summing the constant phase 2\pi \Delta f_0 T.
The baseband equivalent scheme of the baseband differential receiver is given in
Figure 18.5.
The choice of h_{Tx}, g_A^{(bb)}, and g_{Rc} is governed by the same considerations as in the
case of a QAM system; for an ideal channel, the convolution of these elements must be a
Nyquist pulse.
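A minimal numerical sketch of the baseband differential receiver may help fix ideas. It assumes ideal conditions (no noise, no filtering distortion, unit amplitude) and a constant unknown phase offset, and checks that the detector based on z_k = y_k y*_{k-1} recovers the phase steps; the variable names are ours.

```python
import cmath, math, random

M = 4                                        # 4-DPSK
random.seed(0)
# information phase steps phi_k, multiples of 2*pi/M   (cf. (18.14))
phases = [random.randrange(M) * 2 * math.pi / M for _ in range(100)]

# transmitted absolute phases: theta_k = theta_{k-1} + phi_k
theta, acc = [], 0.0
for p in phases:
    acc = (acc + p) % (2 * math.pi)
    theta.append(acc)

# ideal samples y_k of the baseband component: unit amplitude plus an
# arbitrary constant phase offset, which differential detection must cancel
offset = 0.9
y = [cmath.exp(1j * (t + offset)) for t in theta]

# z_k = y_k * conj(y_{k-1})  (18.5); v_k = arg z_k is quantized to 2*pi/M
detected = []
for k in range(1, len(y)):
    v = cmath.phase(y[k] * y[k - 1].conjugate()) % (2 * math.pi)
    detected.append(round(v / (2 * math.pi / M)) % M)

# the detected step indices match the transmitted ones (no noise assumed)
assert detected == [round(p / (2 * math.pi / M)) % M for p in phases[1:]]
```

Note how the constant offset cancels in z_k, exactly as stated for \Delta\varphi_0 in the text.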
I: \quad x(t)\, x(t - T) = M_x(t) \cos[2\pi f_0 t + \Delta\varphi_x(t)]\; M_x(t - T) \cos[2\pi f_0 (t - T) + \Delta\varphi_x(t - T)]

Q: \quad x^{(h)}(t)\, x(t - T) = M_x(t) \sin[2\pi f_0 t + \Delta\varphi_x(t)]\; M_x(t - T) \cos[2\pi f_0 (t - T) + \Delta\varphi_x(t - T)] \qquad (18.9)

The filter g_{Rc} removes the components around 2 f_0; the sampled filter outputs are then
given by

I: \quad z_{k,I} = \tfrac{1}{2}\, M_x(t_0 + kT)\, M_x(t_0 + (k-1)T) \cos[2\pi f_0 T + \Delta\varphi_x(t_0 + kT) - \Delta\varphi_x(t_0 + (k-1)T)]

Q: \quad z_{k,Q} = \tfrac{1}{2}\, M_x(t_0 + kT)\, M_x(t_0 + (k-1)T) \sin[2\pi f_0 T + \Delta\varphi_x(t_0 + kT) - \Delta\varphi_x(t_0 + (k-1)T)] \qquad (18.10)

If f_0 T = n, with n an integer, or after removing this phase offset by suitably phase shifting x or z_k, it
results that

v_k = \tan^{-1} \frac{z_{k,Q}}{z_{k,I}} = \Delta\varphi_x(t_0 + kT) - \Delta\varphi_x(t_0 + (k-1)T) \qquad (18.11)

as in (18.6).
The baseband equivalent scheme is shown in Figure 18.7, where, assuming that g Rc does
not distort the desired signal,
For simple DBPSK, with \varphi_k \in \{0, \pi\}, we consider only the I branch, and v_k = z_{k,I} is
compared with a threshold set to 0.
Performance of M-DPSK

With reference to the transmitted signal (18.1), we consider the isolated pulse with index k,
where

\theta_k = \theta_{k-1} + \varphi_k, \qquad \varphi_k \in \left\{0, \frac{2\pi}{M}, \ldots, \frac{2\pi (M-1)}{M}\right\} \qquad (18.14)

In general, referring to the scheme of Figure 18.7, the filtered signal is given by

x^{(bb)}(t) = u^{(bb)}(t) + w_R^{(bb)}(t) \qquad (18.15)

where u^{(bb)} is the desired signal at the demodulator input (isolated pulse k),

u^{(bb)}(t) = e^{j\theta_k} \left(h_{Tx} * \tfrac{1}{2}\, g_{Ch}^{(bb)} * \tfrac{1}{2}\, g_A^{(bb)}\right)(t - kT) \qquad (18.16)

as (1/2)\, g_{Ch}^{(bb)} is the complex envelope of the impulse response of the transmission channel,
and w_R^{(bb)}(t) is zero-mean additive complex Gaussian noise with variance \sigma^2 = 2 N_0 B_{rn},
where

B_{rn} = \int_{-\infty}^{+\infty} |G_A(f)|^2\, df = \int_{-\infty}^{+\infty} |g_A(t)|^2\, dt = \frac{1}{4} \int_{-\infty}^{+\infty} \big|G_A^{(bb)}(f)\big|^2\, df = \frac{1}{4} \int_{-\infty}^{+\infty} \big|g_A^{(bb)}(t)\big|^2\, dt \qquad (18.17)
w_k = w_R^{(bb)}(t_0 + kT) \qquad (18.21)

and

A = \sqrt{\frac{2 E_s}{T}}\; \frac{1}{2} \sqrt{2T} = \sqrt{E_s} \qquad (18.22)

The desired term is similar to that obtained in the coherent case: M phases on a circle
of radius A = \sqrt{E_s}. The variance of w_k is equal to N_0, and \sigma_I^2 = N_0/2. However, even
ignoring the term w_k w_{k-1}^*/A, if w_k and w_{k-1} are statistically independent the effective
noise variance is twice that of the coherent case.
There is an asymptotic penalty, that is, for E_s/N_0 \to \infty, of 3 dB with respect to the coherent
receiver case. Indeed, for 4-DPSK a more accurate analysis demonstrates that the penalty
is only 2.3 dB at high values of E_s/N_0 (see Section 6.5.1).
[Block diagram of the baseband equivalent limiter–discriminator receiver: from r^{(bb)}(t), the filter g and the scaling 1/(2A) yield x^{(bb)}(t); the instantaneous frequency deviation \Delta f(t) is extracted through \mathrm{Im}[\cdot], d/dt, |\cdot|^2, and the scaling 1/(2\pi); an integrate-and-dump (I&D) stage sampled at t_0 + kT, followed by a modulo operation, produces v_k.]
where \theta_k is the phase associated with the transmitted symbol at instant kT, and \varphi_0 is the
carrier phase.
We denote by a_k the generic symbol at instant kT,

a_k = e^{j\theta_k} \qquad (18.31)

As \theta_k \in \{\pm\pi/4, \pm 3\pi/4\}, we get

a_k \in \frac{1}{\sqrt{2}}\, \{\pm 1, \pm j\} \qquad (18.32)

For a modulation interval T = 2 T_b, the map between bits and symbols is reproduced in
Figure 18.11, with balanced symbols b_\ell \in \{-1, 1\}. In particular, the following bit map is
adopted:

\mathrm{Re}[a_k] = b_{2k-1} \in \{-1, 1\} \qquad \text{odd bits} \qquad (18.33)

\mathrm{Im}[a_k] = b_{2k} \in \{-1, 1\} \qquad \text{even bits} \qquad (18.34)

Let

I(t) = \sum_{k=-\infty}^{+\infty} b_{2k-1}\, h_{Tx}(t - kT) \qquad (18.35)

Q(t) = \sum_{k=-\infty}^{+\infty} b_{2k}\, h_{Tx}(t - kT) \qquad (18.36)

then s is given by

s(t) = I(t) \cos(2\pi f_0 t + \varphi_0) - Q(t) \sin(2\pi f_0 t + \varphi_0) = \mathrm{Re}\left[\sum_{k=-\infty}^{+\infty} a_k\, h_{Tx}(t - kT)\, e^{j(2\pi f_0 t + \varphi_0)}\right] \qquad (18.37)
Figure 18.11. QPSK constellation with corresponding bit map (b_{2k-1}, b_{2k}) and possible
transitions of the phase \theta_k at instants kT.
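The bit map (18.33)–(18.34) can be checked with a few lines; here the symbols are normalized by 1/\sqrt{2} so that they lie on the unit circle, consistent with (18.32).

```python
import math

def qpsk_map(b_odd, b_even):
    """Bit map (18.33)-(18.34), normalized to the unit circle as in (18.32):
    a_k proportional to b_{2k-1} + j*b_{2k}."""
    return complex(b_odd, b_even) / math.sqrt(2)

symbols = {(b1, b2): qpsk_map(b1, b2) for b1 in (-1, 1) for b2 in (-1, 1)}

# the four symbols have unit magnitude and phases +/-pi/4, +/-3pi/4
for a in symbols.values():
    assert abs(abs(a) - 1) < 1e-12
    deg = round(math.degrees(math.atan2(a.imag, a.real)))
    assert deg in (45, 135, -45, -135)

# flipping a single bit rotates the symbol by 90 degrees (distance sqrt(2)),
# while flipping both bits moves it through the origin (a pi phase jump)
d = abs(symbols[(1, 1)] - symbols[(1, -1)])
assert abs(d - math.sqrt(2)) < 1e-12
```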
Figure 18.12. Realization of a QPSK signal with f_0 = 2/T and \varphi_0 = \pi/4, for the bit sequence
b_{-1} = 1, b_0 = 1, b_1 = -1, b_2 = -1, b_3 = 1, b_4 = -1, b_5 = 1, b_6 = 1; the phase \varphi_0 + \theta_k assumes the values 0, -\pi, -\pi/2, 0.
From Figure 18.11 we note that at successive instants the phase \theta_k can undergo
variations even equal to \pi; this implies large discontinuities in s, with a consequent very
high peak-power/average-power ratio of s if h_{Tx} is narrowband. In radio systems this can
create saturation problems in the transmit amplifier (see Section 4.8). Figure 18.12 shows
the possible behavior of s for a wideband modulation pulse

h_{Tx}(t) = A\, \mathrm{rect}\!\left(\frac{t - T/2}{T}\right) \qquad (18.38)

In the offset QPSK (OQPSK) variant, the two branches are offset by half a symbol period,
for which

I(t) = \sum_{k=-\infty}^{+\infty} b_{2k-1}\, h_{Tx}\!\left(t - \left(k - \tfrac{1}{2}\right) T\right) \qquad (18.40)

Q(t) = \sum_{k=-\infty}^{+\infty} b_{2k}\, h_{Tx}(t - kT) \qquad (18.41)
18.3. Variants of QPSK 1201
Figure 18.13. OQPSK constellation with corresponding bit map (b_{2k-1}, b_{2k}) and possible
transitions of the phase \theta_\ell at instants \ell T_b.
In other words, the signal phase is varied alternately on the two branches every T_b
seconds; therefore the phase variation is now at most \pi/2, as illustrated in Figure 18.13.
For h_{Tx} as given by (18.38), and for the same binary sequence as in Figure 18.12, we show
in Figure 18.14 the behavior of s given by (18.43). The reduced phase variation of s implies
a greater compatibility with nonlinear amplifiers. On the other hand, an OQPSK system requires a coherent demodulator.
a_k = e^{j\theta_k} \qquad (18.44)

where

\theta_k = \theta_{k-1} + \varphi_k, \qquad \varphi_k \in \left\{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\right\} \qquad (18.45)

The information is mapped onto the phase variation between two successive symbols. At the
receiver, a non-coherent differential demodulator (see Section 18.2) is commonly employed.
If we denote by e^{j\varphi_k} the information at instant kT, then for the transmitted symbol a_k the
recursive relation a_k = a_{k-1}\, e^{j\varphi_k} holds, from which we again obtain e^{j\varphi_k} by the equation
e^{j\varphi_k} = a_k\, a_{k-1}^*.
Figure 18.14. Realization of an OQPSK signal with f_0 = 2/T and \varphi_0 = \pi/4.
π/4-DQPSK
The transmitted symbol is a_k = e^{j\theta_k}, with

\theta_k = \theta_{k-1} + \varphi_k \qquad (18.46)

where now \varphi_k \in \{\pi/4, 3\pi/4, 5\pi/4, 7\pi/4\}. Possible values of the phase \theta_k are given in
Figure 18.15.
Note that for k even \theta_k assumes values in \{0, \pi/2, \pi, 3\pi/2\}, while for k odd \theta_k \in
\{\pi/4, 3\pi/4, 5\pi/4, 7\pi/4\}. Phase variations between two consecutive instants are \pm\pi/4
and \pm 3\pi/4, yielding a good peak/average power ratio of the modulated signal.
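A short simulation confirms the two properties just stated — alternation between the two interleaved QPSK constellations, and phase steps restricted to \pm\pi/4 and \pm 3\pi/4; the initial phase and the random step sequence are arbitrary.

```python
import math, random

random.seed(1)
# phase steps phi_k in {pi/4, 3pi/4, 5pi/4, 7pi/4}   (18.46)
steps = [random.choice((1, 3, 5, 7)) * math.pi / 4 for _ in range(200)]

theta = 0.0                 # arbitrary initial phase, on the pi/4 grid
grid = []                   # position of theta_k on the pi/4 grid: 0..7
for phi in steps:
    theta = (theta + phi) % (2 * math.pi)
    grid.append(round(theta / (math.pi / 4)) % 8)

# successive phases come from the two interleaved QPSK constellations:
# multiples of pi/2 at one instant, odd multiples of pi/4 at the next
assert all((grid[k] - grid[k + 1]) % 2 == 1 for k in range(len(grid) - 1))

# every transition spans an odd number of pi/4 grid steps (+/-pi/4 or
# +/-3pi/4), so the phase never jumps by pi as in plain QPSK
assert all((grid[k + 1] - grid[k]) % 8 in (1, 3, 5, 7)
           for k in range(len(grid) - 1))
```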
18.3.2 Implementations
QPSK, OQPSK, and DQPSK modulators
Extending the scheme of Figure 6.34, we obtain the general modulator scheme for QPSK,
OQPSK, and DQPSK illustrated in Figure 18.16. To implement each of these modulators
it is sufficient to change the encoder and possibly add a delay of Tb on the Q branch for
OQPSK [2]. Starting from the scheme of Figure 6.35, a coherent demodulator for QPSK
and OQPSK is shown in Figure 18.17. In this scheme the two functions of carrier recovery
(CR) and symbol timing recovery (STR) have been added, which determine the sampling
instants t_0 + k\,2T_b (plus a possible offset T_b) for data detection. For OQPSK, on the I branch the delay
T_b matches the delay introduced on the Q branch of the modulator.
Finally, with reference to the demodulator for DQPSK, the 1BDD non-coherent scheme
of Figure 18.6 is usually employed.
π/4-DQPSK modulators
Due to its wide use, we examine in detail modulation and demodulation schemes for ³=4-
DQPSK. The modulator is similar to that for QPSK of Figure 18.16, differing only in the
bit map. Now, for
a_k = a_{k,I} + j\, a_{k,Q} \qquad (18.47)

from (18.46) a recursive equation for a_{k,I} and a_{k,Q} is given by

a_{k,I} = a_{k-1,I} \cos\varphi_k - a_{k-1,Q} \sin\varphi_k \qquad (18.48)

a_{k,Q} = a_{k-1,I} \sin\varphi_k + a_{k-1,Q} \cos\varphi_k \qquad (18.49)
Figure 18.17. Coherent demodulator for QPSK and OQPSK (CR = carrier recovery, STR =
symbol timing recovery).
Figure 18.18. Eye diagram of the 1BDD for \pi/4-DQPSK, for an ideal channel and square
root raised cosine receive filter with \rho = 0.3.

Figure 18.19. Eye diagram of the 1BDD for \pi/4-DQPSK, for an ideal channel and square
root raised cosine receive filter with \rho = 1.
Figure 18.20. Eye diagram of the LDI demodulator for \pi/4-DQPSK, for an ideal channel and
square root raised cosine receive filter with \rho = 0.3.
Figure 18.21. Eye diagram of the LDI demodulator for \pi/4-DQPSK, for an ideal channel and
square root raised cosine receive filter with \rho = 1.
18.4. Frequency shift keying (FSK) 1207
As already expressed in (6.72), a binary FSK modulator maps the information bits into
frequency deviations (\pm f_d) around a carrier with frequency f_0; the possible transmitted
waveforms are then given by

s(t) = \sum_{k=-\infty}^{+\infty} s_{a_k}(t)\, w_T(t - kT), \qquad a_k \in \{1, 2\} \qquad (18.53)

Figure 18.23. Binary FSK waveforms s_1(t) = A\cos(2\pi f_1 t + \varphi_1), s_2(t) = A\cos(2\pi f_2 t + \varphi_2), and transmitted signal s(t) for a particular sequence \{a_k\}.
where now a_k \in \{-1, 1\} represents the information symbol. The complex envelope of s, with
respect to the carrier f_0, is given by

s^{(bb)}(t) = \frac{A}{2} \left[e^{-j(2\pi f_d t - \varphi_1)} + e^{j(2\pi f_d t + \varphi_2)}\right] \sum_{k=-\infty}^{+\infty} w_T(t - kT)
 + \frac{A}{2} \left[e^{-j(2\pi f_d t - \varphi_1)} - e^{j(2\pi f_d t + \varphi_2)}\right] \sum_{k=-\infty}^{+\infty} a_k\, w_T(t - kT) \qquad (18.55)

We note that

\sum_{k=-\infty}^{+\infty} w_T(t - kT) = 1 \qquad (18.56)

while the second term of (18.55) is a PAM signal; using the results of Example 7.1.1 on
page 544 it is easy to derive the continuous and the discrete parts of the power spectrum
of s^{(bb)}.
\bar{\mathcal P}^{(c)}_{s^{(bb)}}(f) = \frac{A^2}{4T} \Big[|W_T(f + f_d)|^2 + |W_T(f - f_d)|^2 - 2\,\nu(h)\, \mathrm{Re}\big[e^{j(\varphi_2 - \varphi_1)}\, W_T(f + f_d)\, W_T^*(f - f_d)\big]\Big] \qquad (18.57)

\bar{\mathcal P}^{(d)}_{s^{(bb)}}(f) = \frac{A^2}{4} \big[\delta(f + f_d) + \delta(f - f_d)\big]

In (18.57), h = 2 f_d T is the modulation index, and

\nu(h) = \begin{cases} 1 & h \text{ an integer} \\ 0 & \text{otherwise} \end{cases} \qquad (18.58)

It can be seen that the power spectrum (18.57) contains various spurious components [3],
partially due to the fact that \varphi_1 \ne \varphi_2.
Particular case. Applying the previous expressions to the binary case with \varphi_1 = \varphi_2 = 0,
we have^1

s_1^{(bb)}(t) = A\, e^{-j 2\pi f_d t}\, w_T(t) \;\longleftrightarrow\; S_1^{(bb)}(f) = A\, W_T(f + f_d)

s_2^{(bb)}(t) = A\, e^{j 2\pi f_d t}\, w_T(t) \;\longleftrightarrow\; S_2^{(bb)}(f) = A\, W_T(f - f_d) \qquad (18.59)

From (18.57), the continuous part becomes

\bar{\mathcal P}^{(c)}_{s^{(bb)}}(f) = \frac{A^2}{4T} \big[|W_T(f + f_d)|^2 + |W_T(f - f_d)|^2 - 2\,\nu(h)\, \mathrm{Re}[W_T(f + f_d)\, W_T^*(f - f_d)]\big] \qquad (18.60)

The discrete part is instead given by

\bar{\mathcal P}^{(d)}_{s^{(bb)}}(f) = \frac{A^2}{4 T^2} \big[|W_T(f + f_d)|^2 + |W_T(f - f_d)|^2 + 2\, \mathrm{Re}[W_T(f + f_d)\, W_T^*(f - f_d)]\big] \sum_{\ell=-\infty}^{+\infty} \delta\!\left(f - \frac{\ell}{T}\right) \qquad (18.61)
`D1
The behavior of \frac{1}{A^2 T}\, \bar{\mathcal P}^{(c)}_{s^{(bb)}}(f) is illustrated in Figure 18.24 for three values of the modulation
index; the amplitude in dB of the spectral lines of \frac{1}{A^2}\, \bar{\mathcal P}^{(d)}_{s^{(bb)}}(f) is listed in Table 18.1.

^1 In any case, unless specific assumptions are made on f_1 and f_2, the phase of the transmitted signal can
undergo discontinuities at the instants kT.
Figure 18.24. Continuous part of the power spectral density of a non-coherent binary FSK
for three values of the modulation index.
envelope \{s_i^{(bb)}(t)\}. As will be derived next, the continuous and discrete parts of the power
spectrum are given by

\bar{\mathcal P}^{(c)}_{s^{(bb)}}(f) = \frac{1}{MT} \left\{\sum_{i=1}^{M} |S_i^{(bb)}(f)|^2 - \frac{1}{M} \left|\sum_{i=1}^{M} S_i^{(bb)}(f)\right|^2\right\} \qquad (18.62)

\bar{\mathcal P}^{(d)}_{s^{(bb)}}(f) = \frac{1}{(MT)^2} \left|\sum_{i=1}^{M} S_i^{(bb)}(f)\right|^2 \sum_{\ell=-\infty}^{+\infty} \delta\!\left(f - \frac{\ell}{T}\right) \qquad (18.63)
We now derive expressions (18.62) and (18.63). s^{(bb)} is a cyclostationary process
with autocorrelation

r_{s^{(bb)}}(t, t - \tau) = \sum_{k_1=-\infty}^{+\infty} \sum_{k_2=-\infty}^{+\infty} \mathrm{E}\big[s^{(bb)}_{a_{k_1}}(t - k_1 T)\; s^{(bb)*}_{a_{k_2}}(t - \tau - k_2 T)\big]

= \sum_{\substack{k_1, k_2 = -\infty \\ k_1 \neq k_2}}^{+\infty} \mathrm{E}\big[s^{(bb)}_{a_{k_1}}(t - k_1 T)\; s^{(bb)*}_{a_{k_2}}(t - \tau - k_2 T)\big] + \sum_{k_1=-\infty}^{+\infty} \mathrm{E}\big[s^{(bb)}_{a_{k_1}}(t - k_1 T)\; s^{(bb)*}_{a_{k_1}}(t - \tau - k_1 T)\big] \qquad (18.64)

Averaging over t and using the statistical independence of the symbols, the mean autocorrelation
results in

\bar r_{s^{(bb)}}(\tau) = \sum_{m=-\infty}^{+\infty} \frac{1}{T M^2} \sum_{i_1=1}^{M} \sum_{i_2=1}^{M} r_{s^{(bb)}_{i_1} s^{(bb)}_{i_2}}(\tau + mT) - \frac{1}{T M^2} \sum_{i_1=1}^{M} \sum_{i_2=1}^{M} r_{s^{(bb)}_{i_1} s^{(bb)}_{i_2}}(\tau) + \frac{1}{T M} \sum_{i=1}^{M} r_{s^{(bb)}_{i} s^{(bb)}_{i}}(\tau) \qquad (18.66)

Expressions (18.62) and (18.63) then follow by Fourier transformation, using the pair

\sum_{m=-\infty}^{+\infty} x(\tau + mT) = \mathrm{rep}_T\, x(\tau) \;\longleftrightarrow\; \frac{1}{T} \sum_{\ell=-\infty}^{+\infty} X\!\left(\frac{\ell}{T}\right) \delta\!\left(f - \frac{\ell}{T}\right) \qquad (18.67)
Coherent demodulator
A coherent demodulator is used when the transmitter provides all M signals with a well
defined phase (see Example 6.7.1 on page 486); at the receiver there must be a circuit for
the recovery of the phase of the various carriers. In practice this coherent receiver is rarely
used because of its implementation complexity.
In any case, for orthogonal signals we can adopt the scheme of Figure 6.8, repeated in
Figure 18.25 for the binary case.
For this case \rho = 0, and the bit error probability has already been derived in (6.71),

P_{bit} = Q\!\left(\sqrt{\frac{E_s}{N_0}}\right) \qquad (18.68)
Non-coherent demodulator
The transmitted waveforms can now have different phases, in any case unknown to
the receiver (see Example 6.11.1 on page 509). For orthogonal signals, from the general
scheme of Figure 6.59 we give in Figure 18.26 a possible non-coherent demodulator. In
this case, for \rho = 0, the bit error probability has already been derived in (6.341),

P_{bit} = \frac{1}{2}\, e^{-\frac{E_s}{2 N_0}} \qquad (18.70)

For orthogonality of the waveforms, the minimum frequency deviation is now

(2 f_d)_{min} = \frac{1}{T} \qquad (18.71)

Therefore a non-coherent FSK system needs a double frequency deviation and has slightly
lower performance (see Figure 6.62) as compared to coherent FSK; however, it does not
need to acquire the carrier phase.
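The two bit error probability expressions (18.68) and (18.70) are easily compared numerically; q_func below is the usual Gaussian tail probability written via erfc, and the 10 dB operating point is an arbitrary choice of ours. At that point the non-coherent loss turns out to lie between 1 and 2 dB.

```python
import math

def q_func(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pbit_coherent(es_n0):       # orthogonal FSK, coherent detection   (18.68)
    return q_func(math.sqrt(es_n0))

def pbit_noncoherent(es_n0):    # orthogonal FSK, non-coherent detection (18.70)
    return 0.5 * math.exp(-es_n0 / 2)

g10 = 10 ** (10 / 10)           # Es/N0 = 10 dB (linear)
p_nc = pbit_noncoherent(g10)

# the non-coherent detector is always somewhat worse at equal Es/N0...
assert pbit_coherent(g10) < p_nc
# ...and at this operating point its loss lies between 1 and 2 dB
assert pbit_coherent(10 ** (9 / 10)) < p_nc < pbit_coherent(10 ** (8 / 10))
```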
Limiter-discriminator FM demodulator
A simple non-coherent receiver with good performance, for f_1 T \gg 1, f_2 T \gg 1, and
sufficiently high E_s/N_0, is depicted in Figure 18.27.
After a passband filter that partially eliminates the noise, a classical FM demodulator is used
to extract the instantaneous frequency deviation of r(t), equal to \pm f_d. Sampling at instants
t_0 + kT and using a threshold detector, we get the detected symbol \hat a_k \in \{-1, 1\}.
In general, for 2 f_d T = 1, the performance of this scheme is very similar to that of
non-coherent orthogonal FSK.
Figure 18.28. Waveforms s_1(t), s_2(t) as given by (18.52) with A = 1, f_1 = 2/T, f_2 = 3/T, and \varphi_1 = \varphi_2 = 0.
18.5. Minimum shift keying (MSK) 1215
is to employ a single oscillator whose phase satisfies the constraint \varphi((k+1)T^{(-)}) =
\varphi((k+1)T^{(+)}); thus it is sufficient that at the beginning of each symbol interval \varphi_{k+1} is
set equal to

\varphi_{k+1} = \big[(a_k - a_{k+1})\, 2\pi f_d\, (k+1) T + \varphi_k\big] \bmod 2\pi \qquad (18.74)

An alternative method is given by the scheme of Figure 18.29, in which the sequence
\{a_k\}, with binary elements in \{-1, 1\}, is filtered by g to produce a PAM signal

x_f(t) = \sum_{k=-\infty}^{+\infty} a_k\, g(t - kT), \qquad a_k \in \{-1, 1\} \qquad (18.75)
Figure 18.31. Continuous part of the power spectral density of a CPFSK signal for five values
of the modulation index.
[Plot of the phase response q(t) versus t, in units of T_b, with final value 0.5.]
Let

\theta_{k-1} = \frac{\pi}{2} \sum_{i=0}^{k-1} a_i \qquad (18.82)

then it follows that

\Delta\varphi(t) = a_k\, \frac{\pi}{2} \left(\frac{t}{T_b} - k\right) + \theta_{k-1}, \qquad k T_b \le t < (k+1) T_b \qquad (18.83)

For t = (k+1) T_b^{(-)} we get

\Delta\varphi((k+1) T_b^{(-)}) = a_k\, \frac{\pi}{2} + \theta_{k-1} = \theta_k \qquad (18.84)

Assuming \theta_{-1} = 0, the distinct values assumed by \theta_k \bmod 2\pi are only four:

\theta_k \in \begin{cases} \left\{\dfrac{\pi}{2}, \dfrac{3\pi}{2}\right\} & \text{for } k \text{ even} \\[4pt] \{0, \pi\} & \text{for } k \text{ odd} \end{cases} \qquad (18.85)

From (18.84) we note how the information a_k is also contained in the variation, between two
successive instants, of the phase deviation \Delta\varphi. The possible trajectories of \Delta\varphi are shown
in Figure 18.33.
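The four-valued phase pattern (18.85) is easy to verify by iterating \theta_k = \theta_{k-1} + a_k \pi/2 with \theta_{-1} = 0 over a random information sequence; the sequence below is arbitrary.

```python
import math, random

random.seed(2)
a = [random.choice((-1, 1)) for _ in range(500)]   # information symbols

# theta_k = theta_{k-1} + a_k*pi/2, theta_{-1} = 0   (cf. (18.82)-(18.84))
theta = 0.0
quarters = []                      # theta_k on the pi/2 grid: 0..3
for ak in a:
    theta = (theta + ak * math.pi / 2) % (2 * math.pi)
    quarters.append(round(theta / (math.pi / 2)) % 4)

# (18.85): theta_k in {pi/2, 3pi/2} for k even, in {0, pi} for k odd
assert all(q in (1, 3) for q in quarters[0::2])
assert all(q in (0, 2) for q in quarters[1::2])
```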
From (18.81), the complex envelope of the modulated signal is given by

s^{(bb)}(t) = A\, e^{j \Delta\varphi(t)} \qquad (18.86)

where \Delta\varphi(t) is related to the message by (18.83). Correspondingly, the modulated signal
(18.76) becomes

s(t) = I(t) \cos(2\pi f_0 t) - Q(t) \sin(2\pi f_0 t) \qquad (18.87)

where

I(t) = \mathrm{Re}[s^{(bb)}(t)], \qquad Q(t) = \mathrm{Im}[s^{(bb)}(t)] \qquad (18.88)

The signal

s^{(bb)}(t) = I(t) + j\, Q(t) \qquad (18.89)

is represented in Figure 18.34. We note that the phase \Delta\varphi(t) varies continuously in time
and assumes the values \{0, \pi/2, \pi, 3\pi/2\} at instants k T_b.
[Plot of a trajectory of \Delta\varphi(t), over the values \{0, \pi/2, \pi, 3\pi/2, 2\pi\}, at instants 0, T_b, \ldots, 6T_b, for the sequence a_0 = 1, a_1 = -1, a_2 = 1, a_3 = 1, a_4 = 1, a_5 = -1.]
from which we can observe that an MSK signal is a binary FSK signal with frequencies
f_1 = f_0 - 1/(4T) and f_2 = f_0 + 1/(4T), where f_0 is the carrier frequency.
In (18.91) the symbols of the sequence \{a_k\}, a_k \in \{-1, 1\}, are information symbols, and

\varphi_k = \left[(a_{k-1} - a_k)\, 2\pi\, \frac{1}{4 T_b}\, k T_b + \varphi_{k-1}\right] \bmod 2\pi = \left[(a_{k-1} - a_k)\, k\, \frac{\pi}{2} + \varphi_{k-1}\right] \bmod 2\pi \qquad (18.92)

In particular we note that

\varphi_k = \begin{cases} \varphi_{k-1} & a_k = a_{k-1} \\ \varphi_{k-1} \pm 2\pi & a_k \neq a_{k-1} \end{cases} \quad \text{for } k \text{ even}, \qquad
\varphi_k = \begin{cases} \varphi_{k-1} & a_k = a_{k-1} \\ \varphi_{k-1} \pm \pi & a_k \neq a_{k-1} \end{cases} \quad \text{for } k \text{ odd} \qquad (18.93)

hence \varphi_k \in \{0, \pi\}.
From the comparison of (18.90) with (18.83) it is easy to derive the relation between \varphi_k
and \theta_k,

\varphi_k = \theta_{k-1} - k\, a_k\, \frac{\pi}{2} \qquad (18.94)
MSK as OQPSK

As a_k \in \{-1, 1\} and \varphi_k \in \{0, \pi\}, it is easy to verify that

\sin\varphi_k = 0, \qquad \cos\left(a_k \frac{\pi t}{2 T_b}\right) = \cos\frac{\pi t}{2 T_b}, \qquad \sin\left(a_k \frac{\pi t}{2 T_b}\right) = a_k \sin\frac{\pi t}{2 T_b} \qquad (18.95)

Moreover, from (18.91) we obtain

I(t) = A \sum_k w_{T_b}(t - k T_b) \cos\left(a_k \frac{\pi t}{2 T_b} + \varphi_k\right) = A \sum_k w_{T_b}(t - k T_b)\, \cos\varphi_k \cdot \cos\frac{\pi t}{2 T_b}

Q(t) = A \sum_k w_{T_b}(t - k T_b) \sin\left(a_k \frac{\pi t}{2 T_b} + \varphi_k\right) = A \sum_k w_{T_b}(t - k T_b)\, a_k \cos\varphi_k \cdot \sin\frac{\pi t}{2 T_b} \qquad (18.96)
As the signal s is phase-continuous by construction, from (18.87) I and Q must be
continuous functions; therefore:

1. \cos\varphi_k can change only for k odd, that is, at the instants (k\,2T_b + T_b) at which \cos\frac{\pi t}{2 T_b}
vanishes;

2. a_k \cos\varphi_k can change only for k even, that is, at the instants (k\,2T_b) at which \sin\frac{\pi t}{2 T_b}
vanishes.

Therefore, we note that the information symbols associated with the I and Q components can
change only every 2T_b, and there is a lag T_b between the two branches. Defining the variable

c_k = \begin{cases} \cos\varphi_k & \text{for } k \text{ odd} \\ a_k \cos\varphi_k & \text{for } k \text{ even} \end{cases} \qquad (18.97)

Indeed, the transformation that maps the bits of \{a_k\} into \{c_k\} corresponds to a differential
encoder given by^2

a_k = c_k\, c_{k-1} \qquad (18.101)
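The differential encoding relation can be exercised directly; the initialization c_{-1} = 1 is an arbitrary assumption of ours.

```python
import random

random.seed(3)
a = [random.choice((-1, 1)) for _ in range(100)]

# differential encoding: c_k = c_{k-1} * a_k, with c_{-1} = 1 assumed here
c, state = [], 1
for ak in a:
    state *= ak
    c.append(state)

# the inverse map (18.101), a_k = c_k * c_{k-1}, recovers the information
recovered = [c[0] * 1] + [c[k] * c[k - 1] for k in range(1, len(c))]
assert recovered == a

# one detection error on a single c_k corrupts exactly two consecutive a_k
c_err = c[:]
c_err[10] = -c_err[10]
rec = [c_err[0] * 1] + [c_err[k] * c_err[k - 1] for k in range(1, len(c_err))]
assert sum(r != x for r, x in zip(rec, a)) == 2
```

The last check illustrates the error propagation that the precoding discussed later in this section is designed to avoid.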
Then it follows that an MSK scheme is an OQPSK scheme with modulation interval
T = 2 T_b and pulse

h_{Tx}(t) = A\, \mathrm{rect}\!\left(\frac{t - T_b}{2 T_b}\right) \sin\frac{\pi t}{2 T_b} \qquad (18.104)

given in Figure 18.36, with Fourier transform

H_{Tx}(f) = A\, \frac{4 T_b \cos(2\pi f T_b)}{\pi \big(1 - (4 f T_b)^2\big)}\, e^{-j 2\pi f T_b} \qquad (18.105)

We note that the transmitted symbols associated with the OQPSK interpretation are encoded
with a suitable sign.
The example in Table 18.2 shows how the sequence \{a_k\} is mapped into \{c_k\} and onto the
data on the I and Q branches. The modulated signals are shown in Figure 18.37; note the
phase continuity of s.
s^{(bb)}(t) = \sum_{k=-\infty}^{+\infty} j^{\,k+1}\, c_k\, h_{Tx}(t - k T_b) \qquad (18.106)
ak 1 1 1 1 1 1 1 1
ck 1 1 1 1 1 1 1 1
I data 1 1 1 1
Q data 1 1 1 1
s^{(bb)}(t) = \sum_{k=-\infty}^{+\infty} e^{j \psi_k}\, h_{Tx}(t - k T_b) \qquad (18.108)
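Representation (18.106) can be verified numerically: summing half-sine pulses (18.104), offset by T_b and weighted by j^{k+1} c_k, yields — away from the burst edges — a complex envelope of constant magnitude A, i.e. the constant-envelope property of MSK. The symbol values below are random, not those of Table 18.2.

```python
import math, random

random.seed(4)
Tb, A = 1.0, 1.0
c = [random.choice((-1, 1)) for _ in range(20)]    # random, not Table 18.2

def h_tx(t):
    """Half-sine pulse (18.104), supported on [0, 2*Tb]."""
    return A * math.sin(math.pi * t / (2 * Tb)) if 0 <= t <= 2 * Tb else 0.0

def s_bb(t):
    """Complex envelope (18.106): pulses offset by Tb, weighted by j^{k+1} c_k."""
    return sum((1j ** (k + 1)) * ck * h_tx(t - k * Tb) for k, ck in enumerate(c))

# constant envelope: the two half-sines overlapping at any instant sit on
# the I and Q rails in quadrature, so they combine as cos^2 + sin^2 = 1
for i in range(201):
    t = 2 * Tb + i * (16 * Tb) / 200               # stay inside the burst
    assert abs(abs(s_bb(t)) - A) < 1e-9
```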
Figure 18.37. MSK signal for the data sequence of Table 18.2: the components I(t) and -Q(t),
the modulated components I(t)\cos(2\pi f_0 t) and -Q(t)\sin(2\pi f_0 t), and the signal s(t) formed according to (18.87).
[Block diagram of the MSK modulator: x_f(t), obtained by filtering \{a_k\} with w_{T_b} and the scaling \pi/(2T_b), is integrated to obtain \Delta\varphi(t); A\cos[\Delta\varphi(t)] and A\sin[\Delta\varphi(t)] modulate the quadrature carriers \cos(2\pi f_0 t) and -\sin(2\pi f_0 t), whose sum forms s(t).]

[Block diagram of the MSK demodulator (serial and differential structures): r(t) is multiplied by \cos 2\pi f_0 t and -\sin 2\pi f_0 t, filtered by g_{Rc} on each branch, and threshold-detected at the instants t_0 + k\,2T_b (I branch) and t_0 + k\,2T_b + T_b (Q branch, with a T_b delay); after carrier recovery (CR) and symbol timing recovery (STR), the detected samples are parallel-to-serial (P/S) converted and passed to a decoder that yields \hat a_k.]
of T_b (see (18.85)). In any case, the performance for an AWGN channel is that of a DBPSK
scheme, but with half the phase variation; hence from (6.163), with E_s replaced by E_s/2,
we get

P_{bit} = \frac{1}{2}\, e^{-\frac{E_s}{2 N_0}} \qquad (18.111)

Note that this is also the performance of a non-coherent orthogonal binary FSK scheme
(see (18.70)).
In Figure 18.42 we illustrate a coherent (OQPSK type) demodulator that at alternate
instants on the I branch and on the Q branch is of the BPSK type. In this case, from
Table 18.3 Increment of \Gamma (in dB) for an MSK scheme with respect to
a coherent BPSK demodulator, for P_{bit} = 10^{-3}.
(6.151), the error probability for decisions on the symbols \{c_k\} is given by

P_{bit,Ch} = Q\!\left(\sqrt{\frac{2 E_s}{N_0}}\right) \qquad (18.112)

As it is as if the bits \{c_k\} were differentially encoded, to obtain the bit error probability
for decisions on the symbols \{a_k\} we use (6.173):

P_{bit} = 4 P_{bit,Ch} - 8 P_{bit,Ch}^2 + 8 P_{bit,Ch}^3 - 4 P_{bit,Ch}^4 \qquad (18.113)
In Figure 18.43 we show error probability curves for various receiver types. From the
graph we note that, to obtain an error probability of 10^{-3} when going from a coherent system
to a non-coherent one, it is necessary to increase the value of \Gamma = E_s/N_0 as indicated in
Table 18.3.
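Equations (18.112)–(18.113) can be evaluated directly; q_func below is the usual Gaussian tail probability via erfc, and the operating point of 6.8 dB is an arbitrary choice of ours, near P_{bit,Ch} = 10^{-3}.

```python
import math

def q_func(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pbit_ch(es_n0):        # decisions on the symbols c_k   (18.112)
    return q_func(math.sqrt(2 * es_n0))

def pbit(es_n0):           # decisions on a_k, after differential decoding (18.113)
    p = pbit_ch(es_n0)
    return 4 * p - 8 * p**2 + 8 * p**3 - 4 * p**4

gamma = 10 ** (6.8 / 10)   # Es/N0 = 6.8 dB, near Pbit_ch = 1e-3
p = pbit_ch(gamma)
assert 1e-4 < p < 1e-2

# at low Pbit_ch, (18.113) multiplies the error probability by a factor
# that approaches, but stays below, 4
assert 3.9 < pbit(gamma) / p < 4.0
```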
If ã_k = b_k ⊕ b_{k−1} and a_k = 1 − 2ã_k = 1 − 2(b_k ⊕ b_{k−1}), then from (18.100), with c̃_k ∈ {0, 1} and c_k = 1 − 2c̃_k, we get
$$\tilde c_k = \tilde c_{k-1} \oplus \tilde a_k = \tilde a_0 \oplus \tilde a_1 \oplus \cdots \oplus \tilde a_k = b_{-1} \oplus b_k \qquad (18.114)$$
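The telescoping sum in (18.114) can be verified directly; the bit sequence below is an arbitrary example, with `b[0]` playing the role of b_{−1}:

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.integers(0, 2, size=21)              # information bits; b[0] acts as b_{-1}
atilde = b[1:] ^ b[:-1]                      # differential encoding: ã_k = b_k ⊕ b_{k-1}
ctilde = np.bitwise_xor.accumulate(atilde)   # c̃_k = ã_0 ⊕ ã_1 ⊕ ... ⊕ ã_k
assert np.array_equal(ctilde, b[0] ^ b[1:])  # telescoping: c̃_k = b_{-1} ⊕ b_k
```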
In other words, the symbol ck is directly related to the information bit bk ; the performance
loss due to (18.113) is thus avoided and we obtain
Figure 18.44. Normalized power spectral density of the complex envelope of signals obtained
by four modulation schemes.
18.6. Gaussian MSK (GMSK) 1229
Modulating both BPSK and QPSK signals with h_Tx given by a rectangular pulse with duration equal to the symbol period, a comparison between the various power spectra is illustrated in Figure 18.44. We note that for limited-bandwidth channels it is convenient to choose h_Tx of the raised-cosine or square-root raised-cosine type. However, in radio applications the choice of a rectangular pulse may be appropriate, as it generates a signal with a lower peak/average power ratio and is therefore more suitable for amplification by a power amplifier that operates near saturation.
Two observations on Figure 18.44 follow.
• For the same T_b, the main lobe of QPSK extends up to 1/T = 0.5/T_b, whereas that of MSK extends up to 1/T = 0.75/T_b; thus the lobe of MSK is 50% wider than that of QPSK, consequently requiring a larger bandwidth.
• interpolator filter
$$g_I(t) = \frac{1}{2T}\, w_T(t) \qquad (18.117)$$
• shaping filter
$$g_G(t) = \frac{K}{\sqrt{2\pi}}\, e^{-K^2 t^2/2}, \qquad \text{with } K = \frac{2\pi B_t}{\sqrt{\ln 2}} \;\; (B_t \text{ is the 3 dB bandwidth}) \qquad (18.118)$$
• overall filter
$$g(t) = (g_I * g_G)(t)$$
• data symbols
$$a_k \in \{-1, 1\} \qquad (18.120)$$
• interpolated signal
$$x_I(t) = \sum_{k=-\infty}^{+\infty} a_k\, g_I(t - kT) = \frac{1}{2T} \sum_{k=-\infty}^{+\infty} a_k\, w_T(t - kT) \qquad (18.121)$$
• PAM signal
$$x_f(t) = \sum_{k=-\infty}^{+\infty} a_k\, g(t - kT) \qquad (18.122)$$
• modulated signal
$$s(t) = A \cos\!\left(2\pi f_0 t + 2\pi h \int_{-\infty}^{t} x_f(\tau)\, d\tau\right) = A \cos\!\left(2\pi f_0 t + \Delta\varphi(t)\right) \qquad (18.123)$$
where h is the modulation index, nominally equal to 0.5, and $A = \sqrt{2E_s/T}$.
From the above expressions it is clear that the GMSK signal is a frequency-modulated signal with phase deviation $\Delta\varphi(t) = \pi \int_{-\infty}^{t} x_f(\tau)\, d\tau$.
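For the MSK case (rectangular g of area 1/2 per symbol), (18.122)–(18.123) give a phase deviation that advances by exactly ±π/2 per symbol period; the sketch below checks this numerically (the oversampling factor and data sequence are arbitrary choices):

```python
import numpy as np

T, Q0 = 1.0, 64                        # symbol period and oversampling (illustrative)
a = np.array([1, -1, -1, 1, 1])        # binary data a_k ∈ {-1, +1}
g = np.full(Q0, 1.0 / (2 * T))         # MSK case: g(t) = w_T(t)/(2T), area 1/2
xf = np.concatenate([ak * g for ak in a])       # PAM signal x_f(t), (18.122)
dphi = np.pi * np.cumsum(xf) * (T / Q0)         # Δφ(t) = π ∫ x_f(τ)dτ, i.e. h = 1/2
steps = np.diff(np.concatenate([[0.0], dphi[Q0 - 1::Q0]]))
assert np.allclose(steps, a * np.pi / 2)        # each symbol advances the phase by ±π/2
```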
An important parameter is the 3 dB bandwidth, B_t, of the Gaussian filter. However, a reduction in B_t, useful in making the prefiltering more selective, corresponds to a broadening of the PAM pulse with a consequent increase in the intersymbol interference, as can be noted in the plots of Figure 18.46. Thus a trade-off between the two requirements is necessary. The product B_tT was chosen equal to 0.3 in the GSM and HIPERLAN standards, and equal to 0.5 in the DECT standard (see Appendix 17.A). The case B_tT = ∞, i.e. without the Gaussian filter, corresponds to MSK.
Analyzing g in the frequency domain we have
$$g(t) = (g_I * g_G)(t) \;\xrightarrow{\;\mathcal{F}\;}\; G(f) = G_I(f) \cdot G_G(f) \qquad (18.124)$$
As
$$G_I(f) = \frac{1}{2}\,\mathrm{sinc}(fT)\, e^{-j\pi f T}, \qquad G_G(f) = e^{-2\pi^2 (f/K)^2} \qquad (18.125)$$
[Plot: T g(t) versus t/T for B_tT = ∞, 0.5, 0.3, and 0.1.]
Figure 18.46. Overall pulse g(t) = (g_I * g_G)(t), with amplitude normalized to 1/T, for various values of the product B_tT.
it follows that
$$G(f) = \frac{1}{2}\,\mathrm{sinc}(fT)\, e^{-2\pi^2 (f/K)^2}\, e^{-j\pi f T} \qquad (18.126)$$
In Figure 18.47 the behavior of the phase deviation of a GMSK signal with B_tT = 0.3 is compared with the phase deviation of an MSK signal; note that in both cases the phase is continuous, but for GMSK we get a smoother curve, without discontinuities in the slope.
Possible trajectories of the phase deviation for B_tT = 0.3 and B_tT = ∞ are illustrated in Figure 18.48.
Values of $e^{j\Delta\varphi(t)}$ at the decision instants T + kT are illustrated in Figure 18.49 for a GMSK signal with B_tT = 0.3.
[Plots: Δφ(t) versus t/T for B_tT = 0.3 (top) and B_tT = ∞ (bottom).]
Figure 18.47. Phase deviation Δφ of a GMSK signal for B_tT = 0.3, compared with the phase deviation of an MSK signal.
[Plot: Δφ(t)/π versus t/T.]
Figure 18.48. Trajectories of the phase deviation of a GMSK signal for B_tT = 0.3 (solid line) and B_tT = ∞ (dotted line).
[Figure 18.49: values of e^{jΔφ(t)} in the I–Q plane at the decision instants, for a GMSK signal with B_tT = 0.3.]
[Plot: PSD (dB) versus fT, with curves for B_tT = 0.3, 0.5, 1.0, and ∞.]
Figure 18.50. Estimate of the power spectral density of a GMSK signal for various values of B_tT.
1234 Chapter 18. Modulation techniques for wireless systems
Configuration I
The first configuration, illustrated in Figure 18.51, is in the analog domain; in particular, an
analog low pass Gaussian filter is employed. As shown in [1], it is possible to implement
a good approximation of the Gaussian filter by resorting to simple devices, such as lattice
LC filters. However, the weak point of this scheme is the VCO: in open loop the voltage/frequency characteristic of the VCO is non-linear and, as a consequence, the modulation index can vary even by a factor of 10 over the considered frequency range.
Configuration II
The second configuration is represented in Figure 18.52. The digital filter g(nT_Q) that approximates the analog filter g(t) is designed by the window method [4, page 444]. For an oversampling factor Q₀ = 8, letting T_Q = T/Q₀, we consider four filters obtained by windowing the pulse g(t) to the intervals (0, T), (−T/2, 3T/2), (−T, 2T), and (−3T/2, 5T/2), respectively; the coefficients of the last filter are listed in Table 18.4, using the fact that g(nT_Q) has even symmetry with respect to the peak at 4T_Q = T/2.
A comparison among the frequency responses of the four discrete-time filters and the continuous-time filter is illustrated in Figure 18.53. For a good approximation of the analog filter, the possible choices are limited to the two FIR filters with 23 and 31 coefficients, with support (−T, 2T) and (−3T/2, 5T/2), respectively. From now on we will refer to the filter with 31 coefficients.
We note from Figure 18.46 that, for B_tT ≥ 0.3, most of the pulse energy is contained within the interval (−T, 2T); therefore the effect of interference does not extend over more than three symbol periods. With reference to Figure 18.52, the filter g is an interpolator filter
[Figure 18.52: block diagram — the data a_k (rate 1/T) are filtered by g, producing samples x_n at rate 1/T_Q, converted by a DAC, and applied to the VCO that produces s(t).]
Table 18.4
g(nT_Q)      value
g(4T_Q)      0.37119
g(5T_Q)      0.36177
g(6T_Q)      0.33478
g(7T_Q)      0.29381
g(8T_Q)      0.24411
g(9T_Q)      0.19158
g(10T_Q)     0.14168
g(11T_Q)     0.09850
g(12T_Q)     0.06423
g(13T_Q)     0.03921
g(14T_Q)     0.02236
g(15T_Q)     0.01189
g(16T_Q)     0.00589
g(17T_Q)     0.00271
g(18T_Q)     0.00116
g(19T_Q)     0.00046
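Since g is the convolution of a rectangle of area 1/2 with a unit-area Gaussian, T·g(t) has the closed form 0.25[erf(Kt/√2) − erf(K(t−T)/√2)]. The sketch below, assuming the reconstructed constant K = 2πB_t/√(ln 2) from (18.118) and B_tT = 0.3, reproduces the first entries of Table 18.4:

```python
from math import erf, log, pi, sqrt

BtT = 0.3
K = 2 * pi * BtT / sqrt(log(2))        # constant of (18.118), in units of 1/T (assumed form)

def Tg(t_over_T):
    """T*g(t): rectangle of width T and area 1/2 convolved with the Gaussian g_G."""
    return 0.25 * (erf(K * t_over_T / sqrt(2)) - erf(K * (t_over_T - 1) / sqrt(2)))

# first entries of Table 18.4, sampled at t = n*T_Q with T_Q = T/8
table = {4: 0.37119, 5: 0.36177, 6: 0.33478, 7: 0.29381}
for n, v in table.items():
    assert abs(Tg(n / 8) - v) < 1e-4
```

The match confirms both the Gaussian-filter constant and the even symmetry of g(nT_Q) about t = T/2.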
Figure 18.53. Frequency responses of g(t) and g(nT_Q), for T_Q = T/8, and various lengths of the FIR filters.
Configuration III
The weak point of the previous scheme is again represented by the analog VCO; thus it
is convenient to partially implement in the digital domain also the frequency modulation
stage.
Real-valued scheme. The real-valued scheme is illustrated in Figure 18.54. The samples {u_n} are given by (18.129).
More simply, let x_n = x_f(nT_Q) and X_n = X_{n−1} + x_n; then it follows that Δφ_n = π T_Q X_n. Therefore in (18.129) φ_n, with carrier frequency f₁, becomes
$$\varphi_n^{(f_1)} = 2\pi \frac{N_1}{N_2}\, n + \pi T_Q X_n = \varphi_{n-1}^{(f_1)} + 2\pi \frac{N_1}{N_2} + \pi T_Q x_n \qquad (18.131)$$
that is, the value $\varphi_n^{(f_1)}$ is obtained by suitably scaling the accumulated values of x_n. To obtain u_n, we map the value of $\varphi_n^{(f_1)}$ into the memory address of a RAM which contains values of the cosine function (see Figure 18.55).
Obviously the size of the RAM depends on the accuracy with which u_n and φ_n are quantized.³ We note that u(nT_Q) is a real-valued passband signal, with spectrum centered around the frequency f₁; the choice of f₁ is constrained by the bandwidth of the signal u(nT_Q), equal to about 1.5/T, and also by the sampling period, chosen in this example equal to T/8; then it must be
$$f_1 - \frac{3}{4T} > 0, \qquad f_1 + \frac{3}{4T} < \frac{4}{T} \qquad (18.132)$$
or 3/(4T) < f₁ < 13/(4T). A possible choice is f₁ = 1/(4T_Q), assuming N₁ = 1 and N₂ = 4. With this choice we have an image spacing/signal bandwidth ratio equal to 4/3. Moreover, cos(φ_n) = cos(2(π/4)n + Δφ_n) becomes cos((π/2)n + Δφ_n), which in turn is equal to ±cos(Δφ_n) for n even and ±sin(Δφ_n) for n odd. Therefore the scheme of Figure 18.54 can be further simplified.
³ To avoid quantization effects, the number of bits used to represent the accumulated values is usually much larger than the number of bits used to represent φ_n. In practice φ_n coincides with the most significant bits of the accumulated values.
and s̃^{(bb)} are the in-phase and quadrature components. Then we obtain
With respect to the real-valued scheme, we still have a RAM which stores the values of
the cosine function, but two DACs are now required.
$$s^{(bb)}(t) = \sum_{k=-\infty}^{+\infty} e^{\,j\frac{\pi}{2}\sum_{i=-\infty}^{k} a_i}\, h_{Tx}(t - kT_b) = \sum_{k=-\infty}^{+\infty} j^{\,k+1}\, c_k\, h_{Tx}(t - kT_b), \qquad c_k = c_{k-1}\, a_k \qquad (18.135)$$
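The identity between the two forms of (18.135) can be checked numerically; the data sequence below is an arbitrary example, with the sum taken from i = 0 and c_{−1} = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.choice([-1, 1], size=50)                 # data a_k ∈ {-1, +1}
phase = np.exp(1j * (np.pi / 2) * np.cumsum(a))  # e^{j(π/2) Σ_{i<=k} a_i}
c = np.cumprod(a)                                # c_k = c_{k-1} a_k, with c_{-1} = 1
k = np.arange(a.size)
assert np.allclose(phase, 1j ** (k + 1) * c)     # the two forms of (18.135) coincide
```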
where h_Tx(t) is a suitable real-valued pulse that depends on the parameter B_tT and has a support equal to (L+1)T_b, if LT_b is the support of g(t). For example, we show in Figure 18.57 the plot of h_Tx for a GMSK signal with B_tT = 0.3.
The linearization of s^{(bb)}, which leads to interpreting GMSK as a QAM extension of MSK with a different transmit pulse, is very useful for the design of the optimum receiver, which is the same as for QAM systems. Figure 18.58 illustrates the linear approximation of the GMSK model.
As for MSK, also for GMSK it is useful to differentially (pre)code the data {a_k} if a coherent demodulator is employed.
[Figure 18.57: pulse h_Tx(t) versus t/T_b for a GMSK signal with B_tT = 0.3.]
[Block diagram: a_k → differential encoder c_k = c_{k−1} a_k → multiplication by j^{k+1} → filter h_Tx → s^{(bb)}(t).]
Figure 18.58. Linear approximation of a GMSK signal. h_Tx is a suitable pulse which depends on the value of B_tT.
Table 18.5
Modulation system          c
MSK                        2.0
GMSK, B_tT = 0.5           1.93
GMSK, B_tT = 0.3           1.78
π/4-DQPSK, ∀ρ              1.0
where the coefficient c assumes the values given in Table 18.5 for four modulation systems. The plots of P_bit,Ch for the various cases are illustrated in Figure 18.59.
As usual, if the data {a_k} are differentially (pre)coded, P_bit = P_bit,Ch holds; otherwise relation (18.113) holds. From now on we assume that a differential precoder is employed in the presence of coherent demodulation.
[Plot: P_bit,Ch versus Γ (dB) for MSK (B_tT = +∞), GMSK (B_tT = 0.5), GMSK (B_tT = 0.3), and π/4-DQPSK.]
Figure 18.59. P_bit,Ch as a function of Γ for the four modulation systems of Table 18.5.
For the case B_tT = 0.3, and for an ideal AWGN channel, in Figure 18.60 we also give the performance obtained for a receive filter g_Rc of the Gaussian type [7, 8], whose impulse response is given in (18.118), where the 3 dB bandwidth is now denoted by B_r. Clearly the optimum value of B_rT depends on the modulator type, and in particular on B_tT. System performance is evaluated using a 4-state VA or a threshold detector (TD). The VA uses an estimated overall system impulse response obtained by the linear approximation of GMSK. The Gaussian receive filter is characterized by B_rT = 0.3, chosen for best performance. We observe that the VA gains a fraction of a dB as compared to the TD; furthermore, the performance is slightly better than the approximation (18.136).
[Plot: P_bit versus Γ (dB) for the TD and the VA.]
Figure 18.60. P_bit as a function of Γ for coherently demodulated GMSK (B_tT = 0.3), for an ideal channel and Gaussian receive filter with B_rT = 0.3. Two detectors are compared: 1) four-state Viterbi algorithm and 2) threshold detector.
Figure 18.61. Eye diagram at the decision point of the 1BDD for a GMSK system, for an ideal channel and without the filter g_A: (a) B_tT = +∞, (b) B_tT = 0.5, (c) B_tT = 0.3.
Figure 18.62. P_bit as a function of Γ obtained with the 1BDD for GMSK, for an ideal channel and Gaussian receive filter having a normalized bandwidth B_rT.
B_tT = 0.3 requires an increment of 7.8 dB. In Figure 18.62 we also show, for comparison purposes, the performance of π/4-DQPSK with a Gaussian receive filter.
The performance of another widely used demodulator, the LDI (see Section 18.2.3), is quite similar to that of the 1BDD, showing a substantial equivalence between the two non-coherent demodulation techniques applied to GMSK and π/4-DQPSK.
Comparison. Again for an ideal AWGN channel, a comparison among the various modulators and demodulators is given in Table 18.6. As a first observation, we note that a coherent receiver with an optimized Gaussian receive filter provides the same performance,
Table 18.6 Required values of Γ, in dB, to achieve P_bit = 10⁻³ for various modulation and demodulation schemes.
Modulation                        Demodulation
                                  coherent (MAP)    coherent (g_A gauss. + TD)    differential (g_A gauss. + 1BDD)
π/4-DQPSK or QPSK (ρ = 0.3)       9.8               9.8 (ρ = 0.3)                 12.5 (B_rT = 0.5)
MSK                               6.8               6.8 (B_rT = 0.25)             10.3 (B_rT = 0.5)
GMSK (B_tT = 0.5)                 6.9               6.9 (B_rT = 0.3)              12.8 (B_rT = 0.625)
GMSK (B_tT = 0.3)                 7.3               7.3 (B_rT = 0.3)              18.1 (B_rT = 0.625)
evaluated by the approximate relation (18.136), of the MAP criterion. Furthermore, we note that for GMSK with B_tT ≤ 0.5, because of the strong ISI, a differential receiver undergoes a substantial penalty in terms of Γ to achieve a given P_bit with respect to a coherent receiver; this effect is mitigated by canceling the ISI with suitable equalizers [9].
Another method is to detect the signal in the presence of ISI, in part due to the channel and in part to the differential receiver, by the Viterbi algorithm. Substantial improvements with respect to the simple threshold detector are obtained, as shown in [10].
In the previous comparison between amplitude modulation (π/4-DQPSK) and phase modulation (GMSK) schemes we did not take into account the non-linearity introduced by the power amplifier (see Section 4.8), which leads to: 1) signal distortion, and 2) spectral spreading that creates interference in adjacent channels. Usually the latter effect is dominant and is controlled by using an HPA with a back-off that can be even of several dB. In some cases signal predistortion before the HPA allows a decrease of the OBO.
Overall, the best system is the one that achieves, for the same P_bit, the smaller value of
$$(\Gamma)_{dB} + (OBO)_{dB} \qquad (18.137)$$
In other words, for the same P_bit, additive channel noise, and transmit HPA, (18.137) selects the system for which the transmitted signal has the lowest power. Obviously in (18.137) Γ depends on the OBO; at high frequencies, where the HPA usually introduces large levels of distortion, the OBO for a linear modulation scheme may be so large that a phase modulation scheme may be the best solution in terms of (18.137).
[Plot: cdf of P_bit (%) for VA(32) and MF+DFE(13,7), at Γ = 15 dB and Γ = 10 dB.]
Figure 18.63. Comparison between the Viterbi algorithm and a DFE preceded by a MF for the multipath channel EQ6, in terms of BER cdf.
Bibliography
with
$$x_f(t) = \sum_{k=-\infty}^{+\infty} a_k\, g(t - kT) \qquad (18.140)$$
where g(t) is called the instantaneous frequency pulse. In general, the pulse g(t) satisfies the following properties:
limited duration: g(t) = 0 for t < 0 and t > LT   (18.141)
with
$$q(t) = \int_{-\infty}^{t} g(\tau)\, d\tau \qquad (18.146)$$
18.A. Continuous phase modulation (CPM) 1247
[Figure 18.64: phase response pulse q(t) for PSK, CPFSK, and BFSK; each pulse reaches the final value 1/2, with the time axis marked at T and 2T.]
The pulse q.t/ is called phase response pulse and represents the most important part of
the CPM signal because it indicates to what extent each information symbol contributes to
the overall phase deviation. In Figure 18.64 the phase response pulse q.t/ is plotted for
PSK, CPFSK, and BFSK. In general the maximum value of the slope of the pulse q.t/
is related to the width of the main lobe of the PSD of the modulated signal s.t/, and the
number of continuous derivatives of q.t/ influences the shape of the secondary lobes.
In general, the information symbols {a_k} belong to an M-ary alphabet A, which for M even is given by {±1, ±3, …, ±(M−1)}. The constant h is called the modulation index and determines, together with the size of the alphabet, the maximum phase variation in a symbol period, equal to (M−1)hπ. By changing q(t) (or g(t)), h, and M, we can generate several continuous phase modulation schemes. The modulation index h is always given by the ratio of two integers, h = ℓ/p, because this implies that the phase deviation, evaluated modulo 2π, assumes values in a finite alphabet. In fact, we can write
$$\Delta\varphi(t; a) = 2\pi h \sum_{i=k-L+1}^{k} a_i\, q(t - iT) + \theta_{k-L}, \qquad kT \le t < (k+1)T \qquad (18.147)$$
with
$$\theta_{k-L} = \left[\pi \frac{\ell}{p} \sum_{i=-\infty}^{k-L} a_i\right] \bmod 2\pi \qquad (18.148)$$
θ_{k−L} is called the phase state; it represents the overall contribution given by the symbols …, a_{−1}, a_0, a_1, …, a_{k−L} to the phase deviation in the interval [kT, (k+1)T), and can only assume 2p distinct values. The first term in (18.147) is called the correlative state and, because it depends on the L symbols a_{k−L+1}, …, a_k, at a certain instant t it can only assume M^{L−1} distinct values. The phase deviation is therefore characterized by a total number of values equal to 2pM^{L−1}.
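The finite-alphabet property of the phase state (18.148) is easy to check numerically; the sketch below (with an illustrative h = 1/2 and binary symbols) verifies that the state takes at most 2p distinct values:

```python
import random

random.seed(0)
l, p = 1, 2                            # modulation index h = l/p = 1/2 (MSK-like example)
acc, states = 0, set()
for _ in range(1000):
    acc += random.choice([-1, 1])      # running sum of past binary symbols
    states.add((l * acc) % (2 * p))    # phase state (18.148), expressed in units of π/p
assert len(states) <= 2 * p            # the phase state takes at most 2p values
```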
CPM schemes with L = 1, i.e. CPFSK, are called full response schemes and have a reduced complexity, with a number of states equal to 2p; schemes with L ≠ 1 are instead
called partial response schemes. Because of the memory in the modulation process, a partial response scheme allows a trade-off between the error probability at the receiver and the shaping of the power spectrum of the modulated signal. However, this advantage is obtained at the expense of a greater complexity of the receiver. For this reason the modulation index is usually a simple rational number such as 1, 1/2, 1/4, or 1/8.
Advantages of CPM
The popularity of the continuous phase modulation technique derives first of all from the constant-envelope property of the CPM signal; in fact, a signal with a constant envelope allows using very efficient power amplifiers. In the case of linear modulation techniques, such as QAM or OFDM, it is necessary to compensate for the non-linearity of the amplifier by predistortion, or to decrease the average power in order to work in linear conditions.
Before the introduction of TCM, it was believed that source and/or channel coding would allow an improvement in performance only at the expense of a loss in transmission efficiency, and hence would require a larger bandwidth; CPM permits both good performance and highly efficient transmission. However, one of the drawbacks of CPM is the implementation complexity of the optimum receiver, especially in the presence of dispersive channels.
Algorithms for Communications Systems and Their Applications.
Nevio Benvenuto and Giovanni Cherubini
Copyright 2002 John Wiley & Sons, Ltd. ISBN: 0-470-84389-6
Chapter 19. Design of high speed transmission systems
In this chapter we describe the design of two high speed data transmission systems over
unshielded twisted pair cables [1, 2].
Figure 19.1. Block diagram of a QPR-IV transceiver. [From [1], © 1995 IEEE.]
Figure 19.2. Overall analog channel considered for the joint optimization of the analog transmit (ATF) and receive (ARF) filters. [From [1], © 1995 IEEE.]
complexity, the transfer functions of these filters are first expressed in terms of poles and
zeros; the pole-zero configurations are then jointly optimized by simulated annealing as
described in [3]. The cost function that is used for the optimization reflects two criteria: a) the mean-square error between the impulse response h(t; L, u_c) for a cable length equal to 50 m and the ideal PR-IV response must be below a certain value, and b) spectral components of the transmitted signal above 30 MHz should be well suppressed to achieve
compliance with regulations on radiation limits. Investigations with various orders of the
respective transfer functions have shown that a good approximation of an ideal PR-IV
response is obtained with 5 poles and 3 zeros for the ATF, and 3 poles for the ARF.
$$x(t) = \sum_{k=-\infty}^{+\infty} a_k\, h[t - kT;\, L, u_c(L)] + \sum_{k=-\infty}^{+\infty} a_k^N\, h^N[t - kT;\, L, u_c(L)] + w_R(t) \qquad (19.1)$$
19.1. Design of a quaternary partial response class-IV system 1251
where {a_k} and {a_k^N} are the sequences of quaternary symbols generated by the local transmitter and the remote transmitter, respectively, h^N[t; L, u_c(L)] is the NEXT channel response, and w_R(t) is additive Gaussian noise. The signal x(t) is sampled by the ADC, which operates synchronously with the DAC at the modulation rate of 1/T = 62.5 MBaud. The adjustment of the AGC circuit is computed digitally, using the sampled signal x_k = x(kT), such that the ADC output signal achieves a constant average statistical power M_R, i.e.,
$$E[x_k^2]\,\Big|_{u_c = u_c(L)} = \frac{1}{T} \int_0^T E[x^2(t)]\, dt\, \Big|_{u_c = u_c(L)} = M_R \qquad (19.2)$$
This ensures that the received signal is converted with optimal precision independently of
the cable length; moreover, a controlled level of the signal at the adaptive digital equalizer
input is required for achieving optimal convergence properties.
Decorrelation filter
After NEXT cancellation, the signal is filtered by a decorrelation filter (DCF), which is used to improve the convergence properties of the adaptive digital equalizer by reducing the correlation between the samples of the sequence {x̃_k}. The filtering operation performed by the DCF represents an approximate inversion of the PR-IV frequency response. The DCF has frequency response 1/(1 − β e^{−j4πfT}), with 0 < β < 1, and provides at its output the signal
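The DCF frequency response 1/(1 − β e^{−j4πfT}) corresponds to the recursion y_k = x_k + β y_{k−2}. A quick sketch, with an arbitrary β and white quaternary data (both illustrative choices), shows that it largely removes the lag-2 correlation introduced by the PR-IV shaping 1 − D²:

```python
import numpy as np

beta = 0.75
rng = np.random.default_rng(2)
a = rng.choice([-3.0, -1.0, 1.0, 3.0], size=2000)   # white quaternary symbols
x = a.copy()
x[2:] -= a[:-2]                                     # PR-IV shaping: x_k = a_k - a_{k-2}
y = np.zeros_like(x)
for k in range(x.size):                             # DCF: y_k = x_k + beta * y_{k-2}
    y[k] = x[k] + (beta * y[k - 2] if k >= 2 else 0.0)

def rho2(s):
    """Normalized lag-2 sample autocorrelation."""
    s = s - s.mean()
    return np.dot(s[2:], s[:-2]) / np.dot(s, s)

assert rho2(x) < -0.3          # PR-IV output is strongly correlated at lag 2 (rho ~ -0.5)
assert abs(rho2(y)) < 0.25     # the DCF largely removes that correlation
```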
Adaptive equalizer
The samples fz k g are stored in an elastic buffer, from which they are transferred into
the equalizer delay line. Before describing this operation in more detail, we make some
observations about the adaptive equalizer. As mentioned above, the received signal x.t/
is sampled in synchronism with the timing of the local transmitter. Due to the frequency
offset between local and remote transmitter clocks, the phase of the remote transmitter
clock will drift in time relative to the sampling phase. As the received signal is bandlimited
to one half of the modulation rate, signal samples taken at the symbol rate are not affected
by aliasing; hence, a fractionally-spaced equalizer is not required for a QPR-IV system.
Furthermore, as the signal value x.t/ can be reconstructed from the T -spaced samples fz k g,
an equalizer of sufficient length acts also as an interpolator. The adaptive equalizer output
signal is given by
$$y_k = \sum_{i=0}^{N_E - 1} c_{i,k}^E\, z_{k-i} \qquad (19.6)$$
where {c_{i,k}^E}, i = 0, …, N_E − 1, denote the filter coefficients.
where ê_k = y_k − (â_k − â_{k−2}) is the error obtained using tentative decisions â_k on the transmitted quaternary symbols, and μ_E is the adaptation gain.
Figure 19.3. Convergence of the adaptive NEXT canceller. [From [1], © 1995 IEEE.]
1254 Chapter 19. Design of high speed transmission systems
Figure 19.4. Convergence of the adaptive equalizer for (a) best-case sampling phase and (b) worst-case sampling phase. [From [1], © 1995 IEEE.]
Figure 19.5. Convergence of the adaptive equalizer for worst-case timing phase drift. [From [1], © 1995 IEEE.]
is also indicated. For the simulations, the NEXT canceller was assumed to be realized in
the distributed-arithmetic form (see Section 16.1).
The convergence of the MSE at the output of the equalizer in the absence of timing phase drift is shown in Figures 19.4a and b for the best- and worst-case sampling phase, respectively, and a value Γ_Tx = 2E_Tx/N₀ of 43 dB, where E_Tx is the average energy per modulation interval of the transmitted signal. The NEXT canceller is assumed to have converged to the optimum setting. An equalizer length of N_E = 24 is chosen, which guarantees that the mean-square interpolation error with respect to an ideal QPR-IV signal is less than −25 dB for the worst-case sampling phase. The self-training period is T_ST ≈ 400 µs, corresponding to approximately 25000T. The adaptation gains for self-training and decision-directed adjustment have the same value μ_E = 2⁻⁹, chosen for best performance in the presence of a worst-case timing phase drift δ = 10⁻⁴. Figure 19.5 shows the mean-square error convergence curves obtained for δ = 10⁻⁴.
from the received signal, look-up values are selected by the bits in the NEXT canceller
delay line and added by a carry-save adder. By segmenting the delay line of the NEXT
canceller into sections of shorter lengths, a trade-off concerning the number of operations
per modulation interval and the number of memory locations that are needed to store the
look-up values is possible. The convergence of the look-up values to the optimum setting
is achieved by an LMS algorithm.
If the delay line of the NEXT canceller is segmented into L sections with K = N_N/L delay elements each, the NEXT canceller output signal is given by
$$\hat u_k^N = \sum_{i=0}^{N_N - 1} a_{k-i}^N\, c_{i,k}^N = \sum_{\ell=0}^{L-1} \sum_{m=0}^{K-1} a_{k-\ell K - m}^N\, c_{\ell K + m,\, k}^N \qquad (19.8)$$
where b_k^{(w)} = (2a_k^{(w)} − 1) ∈ {−1, +1}. Introducing (19.9) into (19.8) we obtain (see (16.12))
$$\hat u_k^N = \sum_{\ell=0}^{L-1} \sum_{w=0}^{1} 2^w \left[ \sum_{m=0}^{K-1} b_{k-\ell K - m}^{(w)}\, c_{\ell K + m,\, k}^N \right] \qquad (19.10)$$
Equation (19.10) suggests that the filter output can be computed using a set of L2^K look-up values that are stored in L look-up tables with 2^K memory locations each. Extracting the term b_{k−ℓK}^{(w)} out of the square bracket in (19.10), to determine the output of a distributed-arithmetic filter with reduced memory size L2^{K−1}, we rewrite (19.10) as (see (16.13))
$$\hat u_k^N = \sum_{\ell=0}^{L-1} \sum_{w=0}^{1} 2^w\, b_{k-\ell K}^{(w)}\, d_k^N\!\left(i_{k,\ell}^{(w)}, \ell\right) \qquad (19.11)$$
where {d_k^N(n, ℓ)}, n = 0, …, 2^{K−1} − 1, ℓ = 0, …, L − 1, are the look-up values, and i_{k,ℓ}^{(w)} denotes the selected look-up address, which is computed as follows:
$$i_{k,\ell}^{(w)} = \begin{cases} \displaystyle\sum_{m=1}^{K-1} a_{k-\ell K - m}^{(w)}\, 2^{m-1} & \text{if } a_{k-\ell K}^{(w)} = 1 \\[2ex] \displaystyle\sum_{m=1}^{K-1} \bar a_{k-\ell K - m}^{(w)}\, 2^{m-1} & \text{if } a_{k-\ell K}^{(w)} = 0 \end{cases} \qquad (19.12)$$
where ā_n is the one's complement of a_n. The LMS algorithm to update the look-up values of a distributed-arithmetic NEXT canceller takes the form
$$d_{k+1}^N(n, \ell) = d_k^N(n, \ell) + \mu_N\, \tilde x_k \sum_{w=0}^{1} 2^w\, b_{k-\ell K}^{(w)}\, \delta_{n - i_{k,\ell}^{(w)}}, \qquad n = 0, \ldots, 2^{K-1} - 1, \quad \ell = 0, \ldots, L - 1 \qquad (19.13)$$
where Žn is the Kronecker delta. We note that at each iteration only those look-up values
that are selected to generate the filter output are updated. The implementation of the NEXT
canceller is further simplified by updating at each iteration only the look-up values that are
addressed by the most significant bits of the symbols, i.e. those with index w D 1, stored in
the delay line (see (16.23)). The block diagram of an adaptive distributed-arithmetic NEXT
canceller is shown in Figure 19.6. In the QPR-IV transceiver, for the implementation of a
NEXT canceller with a time span of 48T , L D 16 segments with K D 3 delay elements each
are employed. The look-up values are stored in 16 tables with four 16-bit registers each.
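To make the reduced-memory mechanism of (19.11)–(19.12) concrete, the sketch below builds the L look-up tables for a single bit plane (one value of w) and checks the distributed-arithmetic output against a direct FIR computation; the sizes and coefficient values are illustrative, not taken from the transceiver design:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 2, 3                      # illustrative segment count and segment length
N = L * K
c = rng.normal(size=N)           # hypothetical canceller coefficients c_i

# look-up tables: 2^(K-1) entries per segment, as in the reduced-size form (19.11)
tables = np.zeros((L, 2 ** (K - 1)))
for l in range(L):
    for n in range(2 ** (K - 1)):
        d = c[l * K]
        for m in range(1, K):
            bit = (n >> (m - 1)) & 1
            d += (2 * bit - 1) * c[l * K + m]   # sign of c relative to the leading bit
        tables[l, n] = d

def address(seg):
    """(19.12): seg[m] = a_{k-lK-m}; bits are complemented when the leading bit is 0."""
    lead = seg[0]
    return sum(((b if lead else 1 - b) << (m - 1))
               for m, b in enumerate(seg[1:], start=1))

def da_output(bits):
    """Distributed-arithmetic output for one bit plane, with bits[i] = a_{k-i}."""
    out = 0.0
    for l in range(L):
        seg = bits[l * K:(l + 1) * K]
        out += (2 * seg[0] - 1) * tables[l, address(seg)]
    return out

bits = rng.integers(0, 2, size=N)
direct = float(np.dot(2 * bits - 1, c))         # plain FIR on the antipodal symbols
assert abs(da_output(list(bits)) - direct) < 1e-9
```

Summing two such bit planes with weights 2^w recovers the full quaternary output of (19.10).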
[Figure: per-segment address computation from the bits a_k^{(0)}, a_k^{(1)}, …, a_{k−(L−1)K}^{(0)}, a_{k−(L−1)K}^{(1)}; look-up tables 0, …, L−1 holding the values d_k^N(i, ℓ); bit-to-sign mapping (1 → +1, 0 → −1); adaptation with gain μ times x̃_k; weighting by 2 and summation producing û_k^N.]
Figure 19.6. Adaptive distributed-arithmetic NEXT canceller. [From [1], © 1995 IEEE.]
This MAC unit is then reset and its output will be considered again N_E time instants later.
Figure 19.7 depicts the implementation of the digital equalizer. The N_E coefficients {c_i^E}, i = 0, …, N_E − 1, normally circulate in the delay line shown at the top of the figure. Except when recentering of the equalizer coefficients is needed, the N_E coefficients in the delay line are presented each to a different MAC unit, and the signal sample z_k is input to all MAC units. At the next time instant, the coefficients are cyclically shifted by one position and the new signal sample z_{k+1} is input to all the units. The multiplexer shown at the bottom of the figure selects in turn the MAC unit that provides the equalizer output signal.
To explain the operations for recentering of the equalizer coefficients, we consider as an example a simple equalizer with N_E = 4 coefficients; in Figure 19.8, where for simplicity the coefficients are denoted by {c_0, c_1, c_2, c_3}, the coefficients and signal samples at the input of the 4 MAC units are given as a function of the time instant. At time instant k the output of MAC unit 0 is selected, at time instant k + 1 the output of MAC unit 1 is selected, and so on. For a negative frequency offset between the local and remote transmitter clocks, a recentering operation corresponding to a left shift of the equalizer coefficients occasionally occurs, as illustrated in the upper part of Figure 19.8. We note that as a result of this operation a new coefficient c_4, initially set equal to zero, is introduced. We also note that signal samples with proper delay need to be input to the MAC units. A similar operation occurs for a right shift of the equalizer coefficients, as illustrated in the lower part of the figure; in this case a new coefficient c_{−1}, initially set equal to zero, is introduced. In the equalizer implementation shown in Figure 19.7, the control operations to select the filter coefficients and the signal samples are implemented by the multiplexer
[Figure: circulating delay line holding the coefficients c_0^E, c_1^E, …, c_{N_E−1}^E, with insertion points A, B, C selected by the multiplexer MUXC; N_U coefficient updating terms; input multiplexers MUXS0, …, MUXS(N_E−1) selecting between z_k^{(0)} and z_k^{(1)}; output multiplexer producing y_k.]
Figure 19.7. Digital adaptive equalizer: coefficient circulation and updating, and computation of output signal. [From [1], © 1995 IEEE.]
MUXC at the input of the delay line and by the multiplexers MUXS(0), …, MUXS(N_E − 1) at the input of the MAC units, respectively. A left or right shift of the equalizer coefficients is completed in N_E cycles. To perform a left shift, in the first cycle the multiplexer MUXC is controlled so that a new coefficient c_{N_E}^E = 0 is inserted into the delay line. During the following (N_E − 1) cycles, the input of the delay line is connected to point B. After inserting the coefficient c_1^E at the N_E-th cycle, the input of the delay line is connected to point C and normal equalizer operations are restored. For a right shift, the multiplexer MUXC is controlled so that during the first N_E − 1 cycles the input of the delay line is connected to point A. A new coefficient c_{−1}^E = 0 is inserted into the delay line at the N_E-th cycle and normal equalizer operations are thereafter restored. At the beginning of the equalizer operations, the equalizer coefficients are initialized by inserting the sequence {0, …, 0, +1, 0, −1, 0, …, 0} into the delay line.
The adaptation of the equalizer coefficients in decision-directed mode is performed according to the LMS algorithm (19.7). However, to reduce the implementation complexity, the equalizer coefficients are not updated at every cycle; during normal equalizer operations, each coefficient is updated every N_E/N_U cycles by adding correction terms at N_U equally spaced fixed positions in the delay line, as shown in Figure 19.9. The architecture adopted for the computation of the correction terms is similar to the architecture for the computation
Figure 19.8. Coefficients and signals at the input of the multiply-accumulate (MAC) units during coefficient shifting. [From [1], © 1995 IEEE.]
[Figure: multiplexers selecting between z_k^{(0)} and z_k^{(1)}, multiplication by the error ê_k, and an output multiplexer distributing the N_U correction terms.]
Figure 19.9. Adaptive digital equalizer: computation of coefficient adaptation. [From [1], © 1995 IEEE.]
input to the N_U adders in the delay line where the equalizer coefficients are circulating, as illustrated in Figure 19.7.
Timing control
An elastic buffer is provided at the boundary between the transceiver sections that operate at
the transmit and receive timings. The signal samples at the output of the DCF are obtained
at a rate that is given by the transmit timing and stored at the same rate into the elastic
buffer. Signal samples from the elastic buffer are read at a rate that is given by the
receive timing. The VCO that generates the receive timing signal is controlled in order to
prevent buffer underflow or overflow.
Let WPk and RPk denote the values of the two pointers that specify the write and read
addresses, respectively, for the elastic buffer at the k-th cycle of the receiver clock. We
consider a buffer with eight memory locations, so that WPk ; RPk 2 f0; 1; 2; : : : ; 7g. The
write pointer is incremented by one unit at every cycle of the transmitter clock, while the
read pointer is also incremented by one unit at every cycle of the receiver clock.
The difference pointer,

    DP_k = (WP_k − RP_k) mod 8          (19.15)
1262 Chapter 19. Design of high speed transmission systems
is used to generate a binary control signal Δ_k ∈ {±1} that indicates whether the frequency
of the VCO must be increased or decreased:

    Δ_k = { +1          if DP_k = 4, 5
          { −1          if DP_k = 2, 3          (19.16)
          { Δ_{k−1}     otherwise
The signal Δ_k is input to a digital loop filter which provides the control signal to adjust
the VCO. If the loop filter comprises both a proportional and an integral term, with
corresponding gains of μ_τ and μ_{Δτ}, respectively, the resulting second-order phase-locked
loop is described by (see Section 14.7)

    τ_{k+1} = τ_k + μ_τ Δ_k + Δτ_k
                                              (19.17)
    Δτ_{k+1} = Δτ_k + μ_{Δτ} Δ_k

where τ_k denotes the difference between the phases of the transmit and receive timing
signals. With a proper setting of the gains μ_τ and μ_{Δτ}, the algorithm (19.17) allows for
correct initial frequency acquisition of the VCO and guarantees that the write and read
pointers do not overrun each other during steady-state operations.
For every time instant k, the two consecutive signal samples stored in the memory
locations with the address values RP_k and (RP_k − 1) are read and transferred to the
equalizer. These signal samples are denoted by z_k and z_{k−1} in Figure 19.10. When a
recentering of the equalizer coefficients has to take place, for one cycle of the receiver
clock the read pointer is either not incremented (left shift), or incremented by two
units (right shift). These operations are illustrated in the figure, where the elastic buffer
is represented as a circular memory. We note that, by the combined effect of the
timing control scheme and the recentering of the adaptive equalizer coefficients, the
frequency of the receive timing signal equals on average the modulation rate at the
remote transceiver.
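The pointer logic of (19.15)–(19.16) and the loop update (19.17) can be sketched as follows; the gain values are illustrative only, not taken from the text:

```python
# Sketch of the elastic-buffer timing control: the pointer difference
# DP_k (19.15) drives the binary signal Delta_k (19.16), which feeds
# the second-order phase-locked loop (19.17).
def control_signal(dp, prev_delta):
    """Binary VCO control signal per (19.16)."""
    if dp in (4, 5):
        return +1
    if dp in (2, 3):
        return -1
    return prev_delta                   # hold previous value otherwise

def pll_step(tau, dtau, delta, mu_tau=1e-3, mu_dtau=1e-6):
    """One iteration of the second-order loop (19.17); gains are examples."""
    tau_next = tau + mu_tau * delta + dtau
    dtau_next = dtau + mu_dtau * delta
    return tau_next, dtau_next

# One cycle with an 8-location buffer:
wp, rp = 4, 0                 # write and read pointers
dp = (wp - rp) % 8            # difference pointer (19.15)
delta = control_signal(dp, prev_delta=-1)
```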
Viterbi detector
For the efficient implementation of near-MLSD of QPR-IV signals, we consider the reduced-
state Viterbi detector of Example 8.12.1 on page 687. In other words, the signal samples
at the output of the (1 − D^2) partial response channel are viewed as being generated
by two interlaced (1 − D′) dicode channels, where D′ = D^2 corresponds to a delay
of two modulation intervals, 2T. The received signal samples are hence deinterlaced
into even and odd time-indexed sequences. The Viterbi algorithm using a 2-state trellis
is performed independently for each sequence. This reduced-state Viterbi algorithm retains
at any time instant k only the two states with the smallest and second smallest
metrics and their survivor sequences, and propagates the difference between these metrics
instead of the two metrics. Because the minimum distance error events in the partial-
response trellis lead to quasi-catastrophic error propagation, a sufficiently long path memory
depth is needed. A path memory depth of 64T has been found to be appropriate for
this application.
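A minimal model of the 2-state Viterbi detector for one of the two interlaced dicode channels might look as follows. This sketch keeps both state metrics and unbounded path memories for clarity, whereas the implementation described above propagates only the metric difference and a 64T path memory:

```python
# Minimal 2-state Viterbi detector for a dicode (1 - D) channel with
# binary inputs a_k in {-1, +1}; noiseless channel outputs are
# a_k - a_{k-1} in {-2, 0, +2}.
def viterbi_dicode(z):
    states = (-1, +1)                    # state = previous symbol
    metrics = {-1: 0.0, +1: 0.0}
    paths = {-1: [], +1: []}
    for sample in z:
        new_metrics, new_paths = {}, {}
        for s in states:                 # s = current symbol a_k
            best = None
            for p in states:             # p = previous symbol a_{k-1}
                m = metrics[p] + (sample - (s - p)) ** 2
                if best is None or m < best[0]:
                    best = (m, paths[p] + [s])
            new_metrics[s], new_paths[s] = best
        metrics, paths = new_metrics, new_paths
    return paths[min(metrics, key=metrics.get)]
```

In a 100BASE-T2-like receiver, the detector would be run twice, once on the even-indexed and once on the odd-indexed deinterlaced sample sequence.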
19.2. Design of a dual duplex transmission system at 100 Mbit/s 1263
[Figure 19.10, not reproduced here, shows the elastic buffer as a circular memory with the
write and read pointers WP_k and RP_k during normal operation, a left shift, and a right
shift of the equalizer coefficients.]
Figure 19.11. Dual duplex transmission over two wire pairs. [From [2], © 1997 IEEE.]
from other transmissions in multi-pair cables and far-end cross-talk (FEXT), although nor-
mally not very significant, requires specific structures (see, for example, Section 16.4).
To achieve best performance for data transmission over UTP-3 cables, signal bandwidth
must be confined to frequencies not exceeding 30 MHz. As shown in Chapter 17, this
restriction is further mandated by the requirement to meet FCC and CENELEC class B limits
on emitted radiation from communication systems. These limits are defined for frequencies
above 30 MHz. Twisted pairs used in UTP-3 cables have fewer twists per unit of length
and generally exhibit a lower degree of homogeneity than pairs in UTP-5 cables; therefore
transmission over UTP-3 cables produces a higher level of radiation than over UTP-5
cables. Thus it is very difficult to comply with the class B limits if signals containing
spectral components above 30 MHz are transmitted over UTP-3 cables.
As illustrated in Figure 19.11, for 100BASE-T2 a dual duplex baseband transmission
concept was adopted. Bidirectional 100 Mbit/s transmission over two pairs is accomplished
by full duplex transmission of 50 Mbit/s streams over each of two wire pairs. The lower
modulation rate and/or spectral efficiency required per pair for achieving the
100 Mbit/s aggregate rate represents an obvious advantage over mono duplex transmission,
where one pair would be used to transmit only in one direction and the other to transmit
only in the reverse direction. Dual duplex transmission requires two transmitters and two
receivers at each end of a link, as well as separation of the simultaneously transmitted and
received signals on each wire pair. Sufficient separation cannot be accomplished by analog
hybrid circuits only. In 100BASE-T2 transceivers it is necessary to suppress residual echoes
returning from the hybrids and impedance discontinuities in the cable as well as self NEXT
by adaptive digital echo and NEXT cancellation. Furthermore, by sending transmit signals
with nearly 100% excess bandwidth, received 100BASE-T2 signals exhibit spectral redun-
dancy that can be exploited to mitigate the effect of alien NEXT by adaptive digital equal-
ization. It will be shown later in this chapter that, for digital NEXT cancellation and equal-
ization as well as echo cancellation in the case of dual-duplex transmission, dual-duplex and
mono-duplex schemes require a comparable number of multiply-add operations per second.
[Figure 19.12 shows the auto-negotiation process leading to the TRAINING state (blind
receiver training followed by decision-directed training; send idle), the transition to the
IDLE state (decision-directed receiver operation; send idle) when loc_rcvr_status=OK,
and the returns forced by loc_rcvr_status=NOT_OK or rem_rcvr_status=NOT_OK.]
Figure 19.12. State diagram of 100BASE-T2 physical layer control. [From [2], © 1997 IEEE.]
1266 Chapter 19. Design of high speed transmission systems
received from the remote transmitter would generally shift in phase relative to the also-
received echo and self-NEXT signals, as discussed in the previous section. To cope with
this effect some form of interpolation would be required, which can significantly increase
the transceiver complexity.
After auto-negotiation is completed, both 100BASE-T2 transceivers enter the TRAIN-
ING state. In this state a transceiver expects to receive an idle sequence and also
sends an idle sequence, which indicates that its local receiver is not yet trained
(loc_rcvr_status = NOT_OK). When proper local receiver operation has been
achieved by blind training and then by further decision-directed training, a transition to
the IDLE state occurs. In the IDLE state a transceiver sends an idle sequence expressing
normal operation at its receiver (loc_rcvr_status = OK) and waits until the received
idle sequence indicates correct operation of the remote receiver (rem_rcvr_status =
OK). At this time a transceiver enters the NORMAL state, during which data nibbles or idle
sequences are sent and received as demanded by the higher protocol layers. The remaining
transitions shown in the state diagram of Figure 19.12 mainly define recovery functions.
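The main transitions of Figure 19.12 can be sketched as a small state function. The state and status names follow the text, but the recovery conditions of the standard are simplified here:

```python
# Simplified 100BASE-T2 physical-layer control (Figure 19.12):
# TRAINING -> IDLE -> NORMAL, with recovery back to earlier states.
def next_state(state, loc_rcvr_status, rem_rcvr_status):
    if loc_rcvr_status == "NOT_OK":
        return "TRAINING"                # retrain the local receiver
    if state == "TRAINING":
        return "IDLE"                    # local receiver trained
    if state == "IDLE" and rem_rcvr_status == "OK":
        return "NORMAL"                  # both receivers operational
    if state == "NORMAL" and rem_rcvr_status == "NOT_OK":
        return "IDLE"                    # remote receiver lost lock
    return state
```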
The medium independent interface (MII) between the 100BASE-T2 physical layer and
higher protocol layers is the same as for the other 10/100 Mbit/s IEEE 802.3 physical layers.
If the control line TX_EN is inactive, the transceiver sends an idle sequence. If TX_EN is
asserted, 4-bit data nibbles TXD(3:0) are transferred from the MII to the transmitter at the
transmit clock rate of 25 MHz. Similarly, reception of data results in transferring 4-bit data
nibbles RXD(3:0) from the receiver to the MII at the receive clock of 25 MHz. Control
line RX_DV is asserted to indicate valid data reception. Other control lines, such as CRS
(carrier sense) and COL (collision), are required for CSMA/CD specific functions.
• the symbols −2, −1, 0, +1, +2 occur with probabilities 1/8, 1/4, 1/4, 1/4, 1/8,
respectively;
• idle sequences and data sequences exhibit identical power spectral densities;
• scrambler state, pair A and pair B assignment, and temporal alignment and polari-
ties of signals received on these pairs can easily be recovered from a received idle
sequence.
At the core of idle sequence generation and side-stream scrambling is a binary maximum-
length shift-register (MLSR) sequence {p_k} (see Appendix 3.A) of period 2^33 − 1. One
new bit of this sequence is produced at every modulation interval. The transmitters in
the master and slave transceivers generate the sequence {p_k} using the feedback polynomials
g_M(x) = 1 + x^13 + x^33 and g_S(x) = 1 + x^20 + x^33, respectively. The encoding operations
are otherwise identical for the master and slave transceivers. From delayed elements of {p_k},
four derived bits are obtained at each modulation interval as follows:

    x_k = p_{k−3} ⊕ p_{k−8}
    y_k = p_{k−4} ⊕ p_{k−6}
                                              (19.18)
    a_k = p_{k−1} ⊕ p_{k−5}
    b_k = p_{k−2} ⊕ p_{k−12}
where ⊕ denotes modulo-2 addition. The sequences {x_k}, {y_k}, {a_k}, and {b_k} represent
shifted versions of {p_k} that differ from {p_k} and from each other only by large delays.
When observed in a constrained time window, the five sequences appear as mutually uncor-
related sequences. Figures 19.13 and 19.14 illustrate the encoding process for the idle mode
and data mode, respectively. Encoding is based in both cases on the generation of pairs of
two-bit vectors (X_k, Y_k), (S_k^a, S_k^b), and (T_k^a, T_k^b), and Gray-code mapping of (T_k^a, T_k^b) into
symbol pairs (D_k^a, D_k^b), where D_k^i ∈ {−2, −1, 0, +1}, i = a, b. The generation of these
quantities is determined by the sequences {x_k}, {y_k} and {p_k}, the even/odd state of the
time index k (equal to 2n or 2n + 1), and the local receiver status. Finally, pairs of transmit
symbols (a_k^A, a_k^B) are obtained by scrambling the signs of (D_k^a, D_k^b) with the sequences
{a_k} and {b_k}. In Figure 19.15, the symbol pairs transmitted in the idle and data modes are
depicted as two-dimensional signal points.
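The master-side MLSR generator and the derived bits of (19.18) can be sketched as follows. The seed value is arbitrary (it must only be nonzero), and the zero padding of the first few history elements is our simplification:

```python
# Sketch of the master MLSR sequence generator, feedback polynomial
# g_M(x) = 1 + x^13 + x^33, and the derived bits of (19.18).
from collections import deque

def mlsr_bits(seed=12345, n=50):
    """Return n tuples (p_k, x_k, y_k, a_k, b_k)."""
    state = [(seed >> i) & 1 for i in range(33)]   # state[i] = p_{k-1-i}
    history = deque([0] * 12, maxlen=12)           # p_{k-1} ... p_{k-12}
    out = []
    for _ in range(n):
        p_k = state[12] ^ state[32]                # taps from x^13 and x^33
        state = [p_k] + state[:-1]
        h = list(history)
        x_k = h[2] ^ h[7]       # p_{k-3} xor p_{k-8}
        y_k = h[3] ^ h[5]       # p_{k-4} xor p_{k-6}
        a_k = h[0] ^ h[4]       # p_{k-1} xor p_{k-5}
        b_k = h[1] ^ h[11]      # p_{k-2} xor p_{k-12}
        out.append((p_k, x_k, y_k, a_k, b_k))
        history.appendleft(p_k)
    return out
```

The slave-side generator would differ only in the tap positions, per g_S(x) = 1 + x^20 + x^33.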
[Figures 19.13 and 19.14, not reproduced here, show the generation of the transmit symbol
pairs (a_k^A, a_k^B) in the idle mode and in the data mode, respectively: the master/slave MLSR
sequence generator producing {p_k}, the derived sequences, the Gray-code mapping M
([01] → +1, [00] → 0, [10] → −1, [11] → −2), and the sign scrambling S controlled by
{a_k} and {b_k}.]
Figure 19.15. Two-dimensional symbols sent during idle and data transmission.
We note that, in idle mode, if p_k = 1 then symbols a_k^A ∈ A_x = {−1, +1} and a_k^B ∈
A_y = {−2, 0, +2} are transmitted; if p_k = 0 then a_k^A ∈ A_y and a_k^B ∈ A_x are transmitted.
This property enables a receiver to recover a local replica of {p_k} from the two received
quinary symbol sequences.
The associations of the two sequences with pair A and pair B can be checked, and a
possible temporal shift between these sequences can be corrected. Idle sequences have the
further property that in every two-symbol interval with even and odd time indices k, two
symbols ±1, one symbol 0, and one symbol ±2 occur. The signs depend on the receiver
status of the transmitting transceiver and on the elements of the sequences {x_k}, {y_k}, {a_k}
and {b_k}. Once a receiver has recovered {p_k}, these sequences are known, and correct
signal polarities, the even/odd state of the time index k, and the remote receiver status can be
determined.
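The recovery of p_k from an idle-mode symbol pair can be sketched as follows. The decision logic is ours, and handling of noisy or inconsistent symbol pairs is omitted:

```python
# Sketch of recovering p_k from a received idle-mode symbol pair:
# when p_k = 1, pair A carries A_x = {-1, +1} and pair B carries
# A_y = {-2, 0, +2}; when p_k = 0 the roles are swapped.
def recover_pk(symbol_a, symbol_b):
    if abs(symbol_a) == 1 and symbol_b in (-2, 0, 2):
        return 1
    if abs(symbol_b) == 1 and symbol_a in (-2, 0, 2):
        return 0
    raise ValueError("inconsistent idle-mode symbol pair")
```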
In data mode, the two-bit vectors S_k^a and S_k^b are employed to side-stream scramble
the data nibble bits. Compared to the idle mode, the signs of the transmitted symbols
are scrambled with opposite polarity. In the event that detection of the delimiter marking
transitions between idle mode and data mode fails due to noise, a receiver can neverthe-
less rapidly distinguish an idle sequence from a data sequence by inspecting the signs of
the two received ±2 symbols. As mentioned above, during the transmission of idle se-
quences one symbol ±2, i.e. with absolute value equal to 2, occurs in every two-symbol
interval.
The previous description does not yet explain the generation of delimiters. A start-of-
stream delimiter (SSD) indicates a transition from idle-sequence transmission to sending
packet data. Similarly, an end-of-stream delimiter (ESD) marks a transition from sending
packet data to idle-sequence transmission. These delimiters consist of two consecutive
symbol pairs (a_k^A = ±2, a_k^B = ±2) and (a_{k+1}^A = ±2, a_{k+1}^B = 0). The signs of the symbols
±2 in an SSD and in an ESD are selected opposite to the signs normally used in the idle
mode and data mode, respectively. The choice of these delimiters allows detection of mode
transitions with increased robustness against noise.
The principal signal processing functions performed in a 100BASE-T2 transmitter and re-
ceiver are illustrated in Figure 19.16. The digital-to-analog and analog-to-digital converters
operate synchronously, although possibly at different multiples of the 25 MBaud symbol rate.
Timing recovery from the received signals, as required in slave transceivers, is not shown.
This function can be achieved, for example, by exploiting the strongly cyclostationary
nature of the received signals (see Chapter 14).
Figure 19.17. Spectral template specified by the 100BASE-T2 standard for the power spectral
density of transmit signals, and achieved power spectral density for a particular transmitter
implementation comprising a 5-tap digital transmit filter, 100 MHz D/A conversion, and a
3rd-order Butterworth analog transmit filter. [From [2], © 1997 IEEE.]
The use of forward equalizers with T/2-spaced coefficients serves two purposes. First,
as illustrated in Section 8.4, equalization becomes essentially independent of the sampling
phase. Second, when the received signals exhibit excess bandwidth, the superposition of
spectral input-signal components at frequencies f and f − 1/T, for 0 < f < 1/(2T),
in the T-sampled equalizer output signals can mitigate the effects of synchronous in-
terference and asynchronous disturbances, as shown in Appendix 19.A. Interference sup-
pression achieved in this manner can be interpreted as a frequency diversity technique
[5]. Inclusion of the optional cross-coupling feedforward and backward filters shown
in Figure 19.16 significantly enhances the capability of suppressing alien NEXT. This
corresponds to adding space diversity at the expense of higher implementation com-
plexity. Mathematical explanations for the ability to suppress synchronous and asyn-
chronous interference with the cross-coupled forward equalizer structure are given in
Appendix 19.A. This structure permits the complete suppression of the alien NEXT inter-
ference stemming from another 100BASE-T2 transceiver operating in the same multi-pair
cable at an identical clock rate. Alternatively, the interference from a single asynchronous
source, e.g. alien NEXT from 10BASE-T transmission over an adjacent pair, can also
be eliminated.
The 100BASE-T2 standard does not provide for the transmission of specific training se-
quences. Hence, for initial receiver-filter adjustments, blind adaptation algorithms must be
employed. When the mean-square errors at the symbol-decision points reach sufficiently
low values, filter adaptation is continued in decision-directed mode based on quinary symbol
decisions. The filter coefficients can henceforth be continuously updated by the LMS algo-
rithm to track slow variations of channel and interference characteristics.
The 100BASE-T2 Task Force adopted a symbol-error probability target value of 10^−10
that must not be exceeded under the worst-case channel attenuation and NEXT coupling
conditions that arise when two 100BASE-T2 links operate in a four-pair UTP-3 cable, as illus-
trated in Figure 4.23. During the development of the standard, the performance of candidate
100BASE-T2 systems was extensively investigated by computer simulation. For the
scheme ultimately adopted, it was shown that by adopting time spans of 32T for the echo
and self NEXT cancellers, 12T for the forward filters, and 10T for the feedback filters, the
MSEs at the symbol-decision points remain consistently below a value corresponding to a
symbol-error probability of 10^−12.
    Filter complexity = time span × input sampling rate × output sampling rate
                      = number of coefficients × output sampling rate          (19.19)
                      = number of multiply-and-adds per second
Note that the time span of an FIR filter is given in seconds by the product of the number of
filter coefficients times the sampling period of the input signal. Transmission in a four-pair
cable environment with suppression of alien NEXT from a similar transceiver is considered.
Only the echo and self NEXT cancellers and forward equalizers will be compared. Updating
of filter coefficients will be ignored.
For dual duplex transmission, the modulation rate is 25 MBaud and signals are trans-
mitted with about 100% excess bandwidth. Echo and self NEXT cancellation requires four
FIR filters with time spans T_C and input/output rates of 25 Msamples/s. For equalization
and alien NEXT suppression, four forward FIR filters with time spans T_E, an input rate of
50 Msamples/s and an output rate of 25 Msamples/s are needed.
The modulation rate for mono duplex transmission is 1/T = 50 MBaud and signals are
transmitted with no significant excess bandwidth. Hence, both schemes transmit within a
comparable bandwidth (≈ 25 MHz). For an obvious receiver structure that does not allow
alien NEXT suppression, one self NEXT canceller with time span T_C and input/output
rates of 50 Msamples/s, and one equalizer with time span T_E and input/output rates of
50 Msamples/s will be needed. However, for a fair comparison, a mono duplex receiver
must have the capability to suppress alien NEXT from another mono duplex transmission.
This can be achieved by receiving signals not only from the receive pair but also in the
reverse direction of the transmit pair, and combining this signal via a second equalizer
with the output of the first equalizer. The additionally required equalizer exhibits the same
complexity as the first equalizer.
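The counts of (19.19) can be reproduced as follows, under the assumption that both schemes need the same absolute filter time spans (T_C = 32 × 40 ns, T_E = 12 × 40 ns, taken from the dual duplex design, since the physical echo and NEXT durations do not depend on the scheme):

```python
# Rough complexity count per (19.19), in multiply-and-add operations
# per second. Filter time spans are taken in absolute time.
def fir_macs(time_span_s, in_rate, out_rate):
    """(19.19): coefficients = time span * input rate; MACs/s = coeffs * output rate."""
    n_coeff = round(time_span_s * in_rate)
    return n_coeff * out_rate

T = 40e-9                    # dual duplex symbol interval (25 MBaud)
T_C, T_E = 32 * T, 12 * T    # canceller and forward-filter time spans

# Dual duplex: four cancellers at 25/25 Ms/s, four T/2 forward filters at 50/25 Ms/s
dual = 4 * fir_macs(T_C, 25e6, 25e6) + 4 * fir_macs(T_E, 50e6, 25e6)
# Mono duplex: one canceller and two equalizers, all at 50/50 Ms/s
mono = 1 * fir_macs(T_C, 50e6, 50e6) + 2 * fir_macs(T_E, 50e6, 50e6)
# both evaluate to 5.6e9 MAC/s
```

The equal totals illustrate the text's conclusion that the two schemes have comparable implementation complexity; the book's Table 19.1 expresses the same counts in units of the respective symbol intervals.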
The filter complexities for the two schemes are summarized in Table 19.1. As the required
time spans of the echo and self NEXT cancellers and the forward equalizers are similar for
the two schemes, it can be concluded that the two schemes have the same implementation
complexity. The arguments can be extended to the feedback filters. Finally, we note that
with the filter time spans considered in the preceding section (T_C = 32T, T_E = 12T and
T_Fb = 10T), on the order of 10^10 multiply-and-add operations/s need to be executed in a
100BASE-T2 receiver.
Bibliography
[3] G. Cherubini, S. Ölçer, and G. Ungerboeck, “Adaptive analog equalization and receiver
front-end control for multilevel partial-response transmission over metallic cables”,
IEEE Trans. on Communications, vol. 44, pp. 675–685, June 1996.
[4] “Supplement to carrier sense multiple access with collision detection (CSMA/CD) ac-
cess method and physical layer specifications: physical layer specification for 100 Mb/s
operation on two pairs of Category 3 or better balanced twisted pair cable (100BASE-
T2, Clause 32)”, Standard IEEE 802.3y, IEEE, Mar. 1997.
[5] B. R. Petersen and D. D. Falconer, “Minimum mean-square equalization in cyclosta-
tionary and stationary interference–Analysis and subscriber line calculations”, IEEE
Journal on Selected Areas in Communications, vol. 9, pp. 931–940, Aug. 1991.
1274 Chapter 19. Design of high speed transmission systems
Figure 19.18 illustrates the interference situations considered here. Equalization by linear
forward filters only is assumed. Reception of 100BASE-T2 signals is disturbed either by
alien NEXT from another synchronous 100BASE-T2 transmitter or by cross-talk from a
single asynchronous source. Only one of these disturbances may be present. The symbol
sequences {a_k^{A_R}} and {a_k^{B_R}} denote the sequences transmitted by the remote 100BASE-
T2 transceiver, whereas {a_k^{A'_T}} and {a_k^{B'_T}} denote the sequences transmitted by an adjacent
synchronous 100BASE-T2 transmitter. The spectrum S(f) of the asynchronous source may
be aperiodic or exhibit a period different from 1/T. The functions “H(f)” represent the
spectral responses of the signal or cross-talk paths from the respective sources to the inputs
of the forward equalizer filters with transfer functions C_AA(f) and C_BA(f). Because of the
2/T sampling rate, these functions exhibit 2/T-periodicity. All signals and filter coefficients
are real-valued. It is therefore sufficient to consider only frequencies f and f − 1/T, for
0 < f < 1/(2T). We will concentrate on the signals arriving at decision point DPA; the
analysis for signals at DPB is similar.
Intersymbol-interference free reception of the symbol sequence {a_k^{A_R}} and the suppres-
sion of signal components stemming from {a_k^{B_R}} at DPA require

    H_A^c(f) C_AA(f) + H_A^c(f − 1/T) C_AA(f − 1/T) = 1
                                                                  (19.20)
    H_B^c(f) C_BA(f) + H_B^c(f − 1/T) C_BA(f − 1/T) = 0
Figure 19.18. Cross-talk disturbance by: (a) alien NEXT from another synchronous 100BASE-
T2 transmitter, (b) an asynchronous single source, for example, a 10BASE-T transmitter.
[From [2], © 1997 IEEE.]
19.A. Interference suppression 1275
Alternatively, the additional conditions for the suppression of cross-talk caused by a single
asynchronous source become

    H_SA^xt(f) C_AA(f) + H_SB^xt(f) C_BA(f) = 0
                                                                  (19.22)
    H_SA^xt(f − 1/T) C_AA(f − 1/T) + H_SB^xt(f − 1/T) C_BA(f − 1/T) = 0

Therefore in each case the interference is completely suppressed if, for every frequency
in the interval 0 < f < 1/(2T), the transfer function values C_AA(f), C_AA(f − 1/T),
C_BA(f) and C_BA(f − 1/T) satisfy four linear equations. It is highly unlikely that
the cross-talk responses are such that the coefficient matrix of these equations becomes
singular; hence a solution will exist with high probability. In the absence of filter-length
constraints, the T/2-spaced coefficients of these filters can be adjusted to achieve these
transfer functions. For a practical implementation a trade-off between filter lengths and
achieved interference suppression has to be made.
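At a fixed frequency f, the four zero-forcing conditions can be solved numerically. The channel values below are arbitrary complex examples, not measured responses:

```python
# Sketch: at one frequency f in (0, 1/(2T)), conditions (19.20) and
# (19.22) form four linear equations in the four unknowns
# C_AA(f), C_AA(f - 1/T), C_BA(f), C_BA(f - 1/T).
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small complex systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Example values of H_A^c, H_B^c at f and f - 1/T, and of the
# cross-talk responses H_SA^xt, H_SB^xt (made up for illustration):
HA, HA1 = 1.0 + 0.2j, 0.6 - 0.1j
HB, HB1 = 0.3 + 0.1j, 0.2 + 0.05j
SA, SA1 = 0.1 + 0.3j, 0.05 - 0.2j
SB, SB1 = 0.2 - 0.1j, 0.15 + 0.1j

# Unknowns: [C_AA(f), C_AA(f-1/T), C_BA(f), C_BA(f-1/T)]
A = [[HA, HA1, 0, 0],        # (19.20), first condition  = 1
     [0, 0, HB, HB1],        # (19.20), second condition = 0
     [SA, 0, SB, 0],         # (19.22) at f              = 0
     [0, SA1, 0, SB1]]       # (19.22) at f - 1/T        = 0
C = solve(A, [1, 0, 0, 0])
```

In an adaptive receiver these values would not be computed explicitly; the LMS-adapted T/2-spaced coefficients approach the same zero-forcing solution when the filters are long enough.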