Telecommunications
Networks
Anton Čižmár
Ján Papaj
Department of Electronics and Multimedia Telecommunications
CONTENTS
Preface ....................................................................................................................................... 5
1 Introduction ...................................................................................................................... 6
1.1 Mathematical models for communication channels .................................................... 8
1.2 Channel capacity for digital communication ............................................................ 10
1.2.1 Shannon Capacity and Interpretation ................................................................ 10
1.2.2 Hartley Channel Capacity .................................................................................. 12
1.2.3 Solved Problems ................................................................................................. 13
1.3 Noise in digital communication system ..................................................................... 15
1.3.1 White Noise ........................................................................................................ 17
1.3.2 Thermal Noise .................................................................................................... 18
1.3.3 Solved Problems ................................................................................................. 19
1.4 Summary .................................................................................................................... 20
1.5 Exercises .................................................................................................................... 21
2 Signal and Spectra .......................................................................................................... 23
2.1 Deterministic and random signals ............................................................................. 23
2.2 Periodic and nonperiodic signals .............................................................................. 23
2.3 Analog and discrete Signals ...................................................................................... 23
2.4 Energy and power Signals ......................................................................................... 23
2.5 Spectral Density ......................................................................................................... 25
2.5.1 Energy Spectral Density ..................................................................................... 25
3.2.2 Statistical Averages of Random Variables ......................................................... 37
3.2.3 Some Useful Probability Distributions .............................................................. 38
3.3 Stochastic processes .................................................................................................. 41
3.3.1 Stationary Stochastic Processes ......................................................................... 41
3.3.2 Statistical Averages ............................................................................................ 41
3.3.3 Power Density Spectrum .................................................................................... 43
3.3.4 Response of a Linear Time-Invariant System (channel) to a Random Input
Signal ............................................................................................................................ 43
3.3.5 Sampling Theorem for Band-Limited Stochastic Processes .............................. 44
3.3.6 Discrete-Time Stochastic Signals and Systems .................................................. 45
3.3.7 Cyclostationary Processes ................................................................................. 46
3.3.8 Solved Problems ................................................................................................. 47
3.4 Summary .................................................................................................................... 50
3.5 Exercises .................................................................................................................... 52
4 Signal space concept ....................................................................................................... 55
4.1 Representation Of Band-Pass Signals And Systems .................................................. 55
4.1.1 Representation of Band-Pass Signals ................................................................ 55
4.1.2 Representation of Band-Pass Stationary Stochastic Processes ......................... 58
4.2 Introduction of the Hilbert transform ........................................................................ 59
4.3 Different look at the Hilbert transform...................................................................... 59
4.3.1 Hilbert Transform, Analytic Signal and the Complex Envelope ........................ 59
5.2.2 Phase-modulated signal (PSK) .......................................................................... 85
5.2.3 Quadrature Amplitude Modulation (QAM)........................................................ 86
5.3 Multidimensional Signals .......................................................................................... 88
5.3.1 Orthogonal multidimensional signals ................................................................ 88
5.3.2 Linear Modulation with Memory ....................................................................... 92
5.3.3 Non-Linear Modulation Methods with Memory................................................. 95
5.4 Spectral Characteristic Of Digitally Modulated Signals ........................................ 101
5.4.1 Power Spectra of Linearly Modulated Signals ................................................ 101
5.4.2 Power Spectra of CPFSK and CPM Signals .................................................... 103
5.4.3 Solved Problems ............................................................................................... 106
5.5 Summary .................................................................................................................. 110
5.6 Exercises .................................................................................................................. 110
6 Optimum Receivers for the AWGN Channel ............................................................ 113
6.1 Optimum Receivers For Signals Corrupted By AWGN ............................................. 113
6.1.1 Correlation demodulator.................................................................................. 114
6.1.2 Matched-Filter demodulator ............................................................................ 116
6.1.3 The Optimum detector ...................................................................................... 118
6.1.4 The Maximum-Likelihood Sequence Detector ................................................. 120
6.2 Performance Of The Optimum Receiver For Memoryless Modulation .................. 123
6.2.1 Probability of Error for Binary Modulation .................................................... 123
6.2.2 Probability of Error for M-ary Orthogonal Signals ........................................ 126
7.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation .................................. 151
7.5.2 Analogies Between Bandwidth-Efficiency and Error-Probability Planes ....... 152
7.6 Modulation And Coding Trade-Offs ........................................................................ 153
7.7 Defining, Designing, And Evaluating Digital Communication Systems ................. 154
7.7.1 M-ary Signaling................................................................................................ 154
7.7.2 Bandwidth-Limited Systems ............................................................................. 155
7.7.3 Power-Limited Systems .................................................................................... 156
7.7.4 Requirements for MPSK and MFSK Signaling ................................................ 157
7.7.5 Bandwidth-Limited Uncoded System Example ................................................ 158
7.7.6 Power-Limited Uncoded System Example ....................................................... 160
7.8 Solved Problems ...................................................................................................... 162
7.9 Summary .................................................................................................................. 165
7.10 Exercise ................................................................................................................... 166
8 Why use error-correction coding ................................................................................ 167
8.1 Trade-Off 1: Error Performance versus Bandwidth ............................................... 167
8.2 Trade-Off 2: Power versus Bandwidth .................................................................... 168
8.3 Coding Gain ............................................................................................................ 168
8.4 Trade-Off 3: Data Rate versus Bandwidth .............................................................. 168
8.5 Trade-Off 4: Capacity versus Bandwidth ................................................................ 169
8.6 Code Performance at Low Values of Eb/N0 ............................................................. 169
8.7 Solved problem ........................................................................................................ 170
PREFACE
Providing the theory of digital communication systems, this textbook prepares senior undergraduate
and graduate students for the engineering practices required in the real world.
With this textbook, students can understand how digital communication systems operate in practice,
learn how to design subsystems, and evaluate end-to-end performance.
The book contains many examples to help students achieve an understanding of the subject. The
problems at the end of each chapter follow closely the order of the sections.
The entire book is suitable for a one-semester course in digital communication.
All materials for teaching texts were drawn from sources listed in References.
Chapter IV Signal Space Concept
With no loss of generality and for mathematical convenience, it is desirable to reduce all band-pass
signals and channels to equivalent low-pass signals and channels.
As a consequence, the results of the performance of the various modulation and demodulation
techniques presented in the subsequent chapters are independent of carrier frequencies and channel
frequency bands.
4.1.1 Representation of Band-Pass Signals
Suppose that a real-valued signal $s(t)$ has a frequency content concentrated in a narrow band of
frequencies in the vicinity of a frequency $f_c$, as shown in Figure 4.1.
Figure 4.1 The magnitude spectrum $|S(f)|$ of a band-pass signal, concentrated around $\pm f_c$
Our objective is to develop a mathematical representation of such signals. First, we consider a signal
that contains only positive frequencies in s(t). Such a signal may be expressed as
$$S_+(f) = 2\,u(f)\,S(f) \qquad (4.1)$$

where $S(f)$ is the Fourier transform of $s(t)$ and $u(f)$ is the unit step function. The equivalent time-domain expression is

$$s_+(t) = \mathcal{F}^{-1}\left[2u(f)\right] \star \mathcal{F}^{-1}\left[S(f)\right] \qquad (4.2)$$

$$\mathcal{F}^{-1}\left[2u(f)\right] = \delta(t) + j\,\frac{1}{\pi t} \qquad (4.3)$$

Hence
$$s_+(t) = \left[\delta(t) + j\,\frac{1}{\pi t}\right] \star s(t) = s(t) + j\,\frac{1}{\pi t} \star s(t) \qquad (4.4)$$

We define

$$\hat{s}(t) = \frac{1}{\pi t} \star s(t) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{s(\tau)}{t - \tau}\,d\tau \qquad (4.5)$$

The signal $\hat{s}(t)$ may be viewed as the output of the filter with impulse response

$$h(t) = \frac{1}{\pi t}, \qquad -\infty < t < \infty \qquad (4.6)$$
when excited by the input signal $s(t)$. Such a filter is called a Hilbert transformer. The frequency
response of this filter is simply

$$H(f) = \begin{cases} -j, & f > 0 \\ 0, & f = 0 \\ j, & f < 0 \end{cases} \qquad (4.7)$$

We observe that $|H(f)| = 1$ and that the phase response is $-\pi/2$ for $f > 0$ and $\pi/2$ for
$f < 0$. Therefore, this filter is basically a 90° phase shifter for all frequencies in the input signal.
The spectrum of the equivalent low-pass signal is obtained by translating $S_+(f)$ down in frequency:

$$S_l(f) = S_+(f + f_c) \qquad (4.8)$$

The equivalent time-domain relation is

$$s_l(t) = s_+(t)\,e^{-j2\pi f_c t} = \left[s(t) + j\hat{s}(t)\right] e^{-j2\pi f_c t} \qquad (4.9)$$

or

$$s(t) + j\hat{s}(t) = s_l(t)\,e^{j2\pi f_c t} \qquad (4.10)$$

In general, the signal $s_l(t)$ is complex-valued and may be expressed as

$$s_l(t) = x(t) + j\,y(t) \qquad (4.11)$$

$$s(t) = x(t)\cos 2\pi f_c t - y(t)\sin 2\pi f_c t \qquad (4.12)$$

$$\hat{s}(t) = x(t)\sin 2\pi f_c t + y(t)\cos 2\pi f_c t \qquad (4.13)$$
The expression (4.12) is the desired form for the representation of a band-pass signal. The low-frequency signal components $x(t)$ and $y(t)$ may be viewed as amplitude modulations impressed on the
carrier components $\cos 2\pi f_c t$ and $\sin 2\pi f_c t$. Since these carrier components are in phase quadrature,
$x(t)$ and $y(t)$ are called the quadrature components of the band-pass signal $s(t)$.
Another representation of the signal is

$$s(t) = \mathrm{Re}\left[\left(x(t) + j\,y(t)\right) e^{j2\pi f_c t}\right] = \mathrm{Re}\left[s_l(t)\,e^{j2\pi f_c t}\right] \qquad (4.14)$$

A third possible representation is

$$s(t) = a(t)\cos\left[2\pi f_c t + \theta(t)\right] \qquad (4.15)$$

where

$$a(t) = \sqrt{x^2(t) + y^2(t)} \qquad (4.16)$$

$$\theta(t) = \tan^{-1}\frac{y(t)}{x(t)} \qquad (4.17)$$

Then

$$s_l(t) = a(t)\,e^{j\theta(t)} \qquad (4.18)$$
Therefore, Equations 4.12, 4.14, and 4.18 are equivalent representations of band-pass signals. The Fourier
transform of $s(t)$ is

$$S(f) = \int_{-\infty}^{\infty} s(t)\,e^{-j2\pi f t}\,dt = \int_{-\infty}^{\infty} \mathrm{Re}\left[s_l(t)\,e^{j2\pi f_c t}\right] e^{-j2\pi f t}\,dt \qquad (4.19)$$

Use of the identity

$$\mathrm{Re}(\xi) = \tfrac{1}{2}\left(\xi + \xi^*\right) \qquad (4.20)$$

in Equation 4.19 yields

$$S(f) = \tfrac{1}{2}\left[S_l(f - f_c) + S_l^*(-f - f_c)\right] \qquad (4.21)$$

This is the basic relationship between the spectrum $S(f)$ of the real band-pass signal and the
spectrum $S_l(f)$ of the equivalent low-pass signal.
The energy in the signal $s(t)$ is defined as

$$E = \int_{-\infty}^{\infty} s^2(t)\,dt = \int_{-\infty}^{\infty} \left\{\mathrm{Re}\left[s_l(t)\,e^{j2\pi f_c t}\right]\right\}^2 dt \qquad (4.22)$$

When the identity in Equation 4.20 is used in Equation 4.22, we obtain the following result:

$$E = \tfrac{1}{2}\int_{-\infty}^{\infty} \left|s_l(t)\right|^2 dt + \tfrac{1}{2}\int_{-\infty}^{\infty} \left|s_l(t)\right|^2 \cos\left[4\pi f_c t + 2\theta(t)\right] dt \qquad (4.23)$$
Consider the second integral in Equation 4.23. Since the signal $s(t)$ is narrow-band, the real envelope
$a(t) = |s_l(t)|$ or, equivalently, $a^2(t)$ varies slowly relative to the rapid variations exhibited by the
cosine function. A graphical illustration of the integrand in the second integral of Equation 4.23 is
shown in Figure 4.2. The value of the integral is just the net area under the cosine function modulated
by $a^2(t)$. Since the modulating waveform varies slowly relative to the cosine function, the net
area contributed by the second integral is very small relative to the value of the first integral in
Equation 4.23 and, hence, it can be neglected. Thus, for all practical purposes, the energy in the band-pass signal $s(t)$, expressed in terms of the equivalent low-pass signal $s_l(t)$, is
$$E = \tfrac{1}{2}\int_{-\infty}^{\infty} \left|s_l(t)\right|^2 dt \qquad (4.24)$$
Figure 4.2 The signal $a^2(t)\cos\left[4\pi f_c t + 2\theta(t)\right]$
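The approximation in Equation 4.24 is easy to check numerically. The following sketch (illustrative values only; the sampling rate, carrier frequency, and envelope are choices made for this example, not taken from the text) builds a band-pass signal from a slowly varying complex envelope and compares the band-pass energy with half the low-pass energy.

```python
import numpy as np

fs = 10_000.0                    # sampling rate (illustrative choice)
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
fc = 1_000.0                     # carrier frequency, much larger than the envelope bandwidth

# A slowly varying complex envelope s_l(t) = x(t) + j y(t)
x = np.cos(2 * np.pi * 3 * t)          # in-phase component, 3 Hz
y = 0.5 * np.sin(2 * np.pi * 5 * t)    # quadrature component, 5 Hz
s_l = x + 1j * y

# Band-pass signal from Equation 4.14: s(t) = Re[s_l(t) exp(j 2 pi fc t)]
s = np.real(s_l * np.exp(1j * 2 * np.pi * fc * t))

E_bandpass = np.sum(s**2) / fs                 # Equation 4.22 (Riemann sum)
E_lowpass = 0.5 * np.sum(np.abs(s_l)**2) / fs  # Equation 4.24

print(E_bandpass, E_lowpass)  # the two values agree closely
```

Because the envelope varies slowly relative to the carrier, the oscillatory second integral in Equation 4.23 is negligible and the two printed energies match.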
4.1.2 Representation of Band-Pass Stationary Stochastic Processes
Suppose that $n(t)$ is a sample function of a wide-sense stationary stochastic process with zero mean
and power spectral density $\Phi_{nn}(f)$. The power spectral density is assumed to be zero outside of an
interval of frequencies centered around $\pm f_c$. The stochastic process $n(t)$ is said to be a narrowband
band-pass process if the width of the spectral density is much smaller than $f_c$. Under this condition, a
sample function of $n(t)$ can be represented by any of the three equivalent forms
$$n(t) = a(t)\cos\left[2\pi f_c t + \theta(t)\right] \qquad (4.25)$$

$$n(t) = x(t)\cos 2\pi f_c t - y(t)\sin 2\pi f_c t \qquad (4.26)$$

$$n(t) = \mathrm{Re}\left[z(t)\,e^{j2\pi f_c t}\right] \qquad (4.27)$$

where $a(t)$ is the envelope and $\theta(t)$ is the phase of the real-valued signal, $x(t)$ and $y(t)$ are the
quadrature components of $n(t)$, and $z(t) = x(t) + j\,y(t)$ is called the complex envelope of $n(t)$.

Let us consider Equation 4.26 in more detail. First, we observe that if $n(t)$ is zero mean, then $x(t)$ and $y(t)$
must also have zero mean values. In addition, the stationarity of $n(t)$ implies that the autocorrelation
and cross-correlation functions of $x(t)$ and $y(t)$ satisfy the following properties:

$$\phi_{xx}(\tau) = \phi_{yy}(\tau) \qquad (4.28)$$

$$\phi_{xy}(\tau) = -\phi_{yx}(\tau) \qquad (4.29)$$

$$\phi_{nn}(\tau) = \mathrm{Re}\left[\phi_{zz}(\tau)\,e^{j2\pi f_c \tau}\right], \qquad \phi_{zz}(\tau) = \phi_{xx}(\tau) + j\,\phi_{yx}(\tau) \qquad (4.30)$$

The power density spectrum $\Phi_{nn}(f)$ of the stochastic process $n(t)$ is the Fourier transform of $\phi_{nn}(\tau)$:
$$\Phi_{nn}(f) = \tfrac{1}{2}\left[\Phi_{zz}(f - f_c) + \Phi_{zz}(-f - f_c)\right] \qquad (4.31)$$
4.2 INTRODUCTION OF THE HILBERT TRANSFORM

In 1743 the famous Swiss mathematician Leonhard Euler (1707-1783) derived the formula

$$e^{jz} = \cos z + j\sin z \qquad (4.32)$$

150 years later the physicist Arthur E. Kennelly and the scientist Charles P. Steinmetz used this
formula to introduce the complex notation of harmonic waveforms in electrical engineering, that is

$$e^{j\omega t} = \cos \omega t + j\sin \omega t \qquad (4.33)$$

Later on, in the beginning of the 20th century, the German scientist David Hilbert (1862-1943) finally
showed that the function $\sin \omega t$ is the Hilbert transform of $\cos \omega t$. This gave us the $\pm\pi/2$ phase-shift
operator, which is a basic property of the Hilbert transform.
4.3 DIFFERENT LOOK AT THE HILBERT TRANSFORM
A real function $f(t)$ and its Hilbert transform $\hat{f}(t)$ are related to each other in such a way that they
together create a so-called strong analytic signal. The strong analytic signal can be written with an
amplitude and a phase, where the derivative of the phase can be identified as the instantaneous
frequency. The Fourier transform of the strong analytic signal gives us a one-sided spectrum in the
frequency domain. It is not hard to see that a function and its Hilbert transform are also orthogonal.
This orthogonality is not always realized in applications because of truncations in numerical
calculations. However, a function and its Hilbert transform have the same energy, and therefore the
energy can be used to measure the calculation accuracy of the approximated Hilbert transform. The
Hilbert transform defined in the time domain is a convolution between the Hilbert transformer $1/\pi t$
and a function $f(t)$.
Figure 4.3 Generating the I and Q components: the signal g(t) is multiplied by $\cos 2\pi f_c t$ from an oscillator (and, through a Hilbert transformer, by $\sin 2\pi f_c t$); low-pass filtering then yields I (scaled by 1/2) and Q (scaled by -1/2)
The Hilbert Transform is not a particularly complex concept and can be much better understood if we take
an intuitive approach first before delving into its formula, which is related to convolution and is hard to
grasp. The diagram above, often seen in textbooks describing modulation, gives us a clue as
to what a Hilbert Transform does.
The role of the Hilbert transform, as we can guess here, is to take the carrier, which is a cosine wave, and
create a sine wave out of it. So let's take a closer look at a cosine wave to see how this is done by the
Hilbert transformer. Figure 4.4 a) shows the amplitude and the phase spectrum of a cosine wave. Now
recall that the Fourier Series is written as
$$v(t) = \sum_{n} \left(a_n \cos n\omega_0 t + b_n \sin n\omega_0 t\right) \qquad (4.34)$$

where

$$a_n = \frac{2}{T}\int_0^T v(t)\cos n\omega_0 t\,dt \qquad \text{and} \qquad b_n = \frac{2}{T}\int_0^T v(t)\sin n\omega_0 t\,dt$$

and $a_n$ and $b_n$ are the spectral amplitudes of the cosine and sine waves. Now take a look at the phase
spectrum. The phase spectrum is computed by

$$\phi_n = \tan^{-1}\left(\frac{a_n}{b_n}\right) \qquad (4.35)$$

A cosine wave has no sine spectral content, so $b_n$ is zero. The phase calculated from the above formula is 90° for both positive
and negative frequencies. The wave has two spectral components, each of
magnitude 1/2, both positive and lying in the real plane (the real plane is described as the one passing
vertically through the real axis (R-V plane), and the imaginary plane as the one passing horizontally through the
imaginary axis (R-I plane)).
Figure 4.4 a) The amplitude and phase spectrum of a cosine wave (two real spectral components of magnitude A/2 at $\pm f$); b) the amplitude and phase spectrum of a sine wave (two imaginary spectral components of magnitude A/2 with opposite signs)
Figure 4.4 b) shows the same two spectra for a sine wave. The sine wave phase spectrum is not symmetric
because the amplitude spectrum is not symmetric. The quantity $a_n$ is zero, and $b_n$ has either a positive
or a negative value. The phase is $-90°$ for the positive frequency and $+90°$ for the negative frequency.
Now we wish to convert the cosine wave into a sine wave. There are two ways of doing that, one in the time
domain and the other in the frequency domain.
The cosine wave has spectral amplitudes that are both positive and lie in the real plane. The sine wave has spectral components that lie
in the imaginary plane and are of opposite sign.
To turn cosine into sine, as shown in Figure 4.5 below, we need to rotate the negative frequency
component of the cosine by $+90°$ and the positive frequency component by $-90°$. In other words, we need to
rotate the positive-frequency phasor by $-90°$, which is to multiply it by $-j$, and rotate the
negative-frequency phasor by $+90°$, which is to multiply it by $+j$.
Figure 4.5 Rotating the positive-frequency component of a cosine by $-90°$ and the negative-frequency component by $+90°$ turns it into a sine
We can describe this transformation process called the Hilbert Transform as follows:
All negative frequencies of a signal get a $+90°$ phase shift, and all positive frequencies get a $-90°$
phase shift.
If we put a cosine wave through this transformer, we get a sine wave. This phase rotation process is
true for all signals put through the Hilbert transform and not just the cosine.
For any signal g(t), its Hilbert Transform has the following property:

$$\hat{G}(f) = \begin{cases} -j\,G(f), & f > 0 \\ +j\,G(f), & f < 0 \end{cases} \qquad (4.36)$$
(Putting a little hat over the capital letter representing the time domain signal is the typical way a
Hilbert Transform is written.)
A sine wave put through a Hilbert Transformer will come out as a negative cosine. A negative cosine will
come out as a negative sine wave, and one more transformation will return it to the original cosine wave,
each time its phase being changed by 90°.
$$\cos \omega t \;\to\; \sin \omega t \;\to\; -\cos \omega t \;\to\; -\sin \omega t \;\to\; \cos \omega t$$

For this reason the Hilbert transform is also called a quadrature filter. We can draw this filter as shown
below in Figure 4.6.
Figure 4.6 The Hilbert Transform shifts the phase of positive frequencies by $-90°$ and of negative frequencies by $+90°$
So here are two things we can say about the Hilbert Transform.
KE
1. It is a peculiar sort of filter that changes the phase of the spectral components depending
on the sign of their frequency.
2. It only affects the phase of the signal. It has no effect on the amplitude at all.
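These two observations translate directly into a frequency-domain implementation. The sketch below (a hypothetical helper of my own, not from the text) multiplies positive-frequency FFT bins by $-j$ and negative-frequency bins by $+j$, exactly the phase rule stated above, and confirms that a cosine comes out as a sine.

```python
import numpy as np

def hilbert_transform(x):
    """FFT-based Hilbert transformer: -90 deg for f > 0, +90 deg for f < 0."""
    N = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(N)        # signed normalized frequency of each bin
    H = -1j * np.sign(f)         # -j for f > 0, +j for f < 0, 0 at DC
    return np.real(np.fft.ifft(H * X))

# A cosine goes in, a sine comes out.
n = np.arange(1024)
cosine = np.cos(2 * np.pi * 8 * n / 1024)
sine = np.sin(2 * np.pi * 8 * n / 1024)
xh = hilbert_transform(cosine)
print(np.max(np.abs(xh - sine)))  # close to zero
```

Applying the same function again returns $-\cos$, in agreement with the chain of transformations above.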
The Hilbert Transform in the time domain is defined as

$$\hat{s}(t) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{s(\tau)}{t - \tau}\,d\tau \qquad (4.37)$$

Another way to write this definition is to recognize that the Hilbert Transform is also the convolution of the
function $1/\pi t$ with the signal s(t). So we can write the above equation as

$$\hat{s}(t) = \frac{1}{\pi t} \star s(t) \qquad (4.38)$$
Achieving a Hilbert Transform in the time domain means convolving the signal with the function $1/\pi t$. Why
the function $1/\pi t$; what is its significance? Let's look at the Fourier Transform of this function. What
does that tell us? Given in Equation 4.39, the transform looks a lot like the Hilbert transform we talked
about before.

$$\mathcal{F}\left[\frac{1}{\pi t}\right] = -j\,\mathrm{sgn}(f) \qquad (4.39)$$

The term sgn in Equation 4.39 above, called signum, is simpler than it seems. Here is the way we
could have written it which would have been more understandable:

$$\mathrm{sgn}(f) = \begin{cases} +1, & f > 0 \\ 0, & f = 0 \\ -1, & f < 0 \end{cases} \qquad (4.40)$$
In Figure 4.7 we show the signum function and its decomposition into two familiar functions.
Figure 4.7 The signum function decomposed into a unit step function and a constant: $\mathrm{sgn}(f) = 2u(f) - 1$
As a shortcut, writing sgn is useful, but it is better if it is understood as a sum of the two much
simpler functions above. (We will use this relationship later.)

$$\mathrm{sgn}(f) = 2u(f) - 1 \qquad (4.41)$$
We see in Figure 4.8 that although $1/\pi t$ is a real function, it has a Fourier transform that lies strictly in
the imaginary plane. Do you recall what this means in terms of Fourier Series coefficients? What does
it tell us about a function if it has no real components in its Fourier transform? It says that this function
can be represented completely by a sum of sine waves. It has no cosine component at all.
Figure 4.8 The function $1/\pi t$ in the time domain and its Fourier transform, which lies entirely in the imaginary plane
In Figure 4.9, we see a function composed of a sum of 50 sine waves. We see the similarity of this
function with that of $1/\pi t$. Now you can see that although the function looks nothing at all like a sinusoid,
we can still approximate it with a sum of sinusoids.
The function $1/\pi t$ gives us a spectrum that explains the Hilbert Transform in the time domain, albeit
this way of looking at the Hilbert Transform is indeed very hard to grasp.
We limit our discussion of the Hilbert transform to the frequency domain due to this difficulty.
Figure 4.9 Approximating the function $1/\pi t$ with a sum of 50 sine waves
We can add the following to our list of observations about the Hilbert Transform.
3. The signal and its Hilbert Transform are orthogonal. This is because by rotating the signal
90 we have now made it orthogonal to the original signal, that being the definition of
orthogonality.
4. The signal and its Hilbert Transform have identical energy, because phase shifts do not
change the energy of a signal; only amplitude changes can do that.
A signal and its Hilbert Transform together create what is called an Analytic signal. Analytic signals are used in Double and Single side-band processing (more about SSB
and DSB later) as well as in creating the I and Q components of a real signal.

$$z(t) = s(t) + j\,\hat{s}(t) \qquad (4.42)$$

An analytic signal is a complex signal created by taking a signal and then adding in quadrature its
Hilbert Transform. It is also called the pre-envelope of the real signal.
Substitute $\cos \omega t$ for s(t) in Equation 4.42; knowing that its Hilbert transform is a sine, we get

$$z(t) = \cos \omega t + j\sin \omega t = e^{j\omega t}$$

The analytic function of a cosine is the now familiar phasor, the complex exponential $e^{j\omega t}$.
Now substitute $\sin \omega t$ for s(t) in Equation 4.42; knowing that its Hilbert transform is $-\cos \omega t$, we get
once again a complex exponential:

$$z(t) = \sin \omega t - j\cos \omega t = -j\,e^{j\omega t}$$
Do you remember what the spectrum of a complex exponential looks like? To remind you, I repeat
here the figure.
Figure: The complex exponential in the time domain and its spectrum, a single spectral line on the positive-frequency side only
We can see from the figure above, that whereas the spectrum of a sine and cosine spans both the
negative and positive frequencies, the spectrum of the analytic signal, in this case the complex
TS
exponential, is in fact present only in the positive domain. This is true for both sine and cosine and in
fact for all real signals.
Restating the results: the Analytic signal for both sine and cosine is the complex exponential.
Even though both sine and cosine have a two-sided spectrum, as we see in the figures above, the
complex exponential, which is the analytic signal of a sinusoid, has a one-sided spectrum.
We can generalize from this: An analytic signal (composed of a real signal and its Hilbert transform)
has a spectrum that exists only in the positive frequency domain.
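This generalization can be verified numerically. The sketch below (my own construction, following the recipe of keeping DC, doubling positive-frequency bins, and zeroing negative ones) forms the analytic signal of a cosine and checks that its spectrum is one-sided.

```python
import numpy as np

def analytic_signal(x):
    """Form s(t) + j s_hat(t) by one-siding the spectrum (assumes even N)."""
    N = len(x)
    X = np.fft.fft(x)
    H = np.zeros(N)
    H[0] = 1.0            # keep DC as-is
    H[1:N // 2] = 2.0     # double positive frequencies
    H[N // 2] = 1.0       # Nyquist bin
    return np.fft.ifft(H * X)  # negative-frequency bins are zeroed

n = np.arange(256)
x = np.cos(2 * np.pi * 10 * n / 256)
z = analytic_signal(x)

Z = np.fft.fft(z)
neg_energy = np.sum(np.abs(Z[256 // 2 + 1:])**2)  # negative-frequency half
print(neg_energy)  # essentially zero: the spectrum is one-sided
```

For this cosine, the analytic signal is exactly the complex exponential $e^{j2\pi \cdot 10 n/256}$, in agreement with the discussion above.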
The conjugate analytic signal, $s(t) - j\hat{s}(t)$, has components only in the negative frequencies and can be used to separate out the lower
side-bands.
Now back to the analytic signal. Let's extend our understanding by taking the Fourier Transform of both
sides of Equation 4.42. We get

$$Z(f) = S(f) + j\left[-j\,\mathrm{sgn}(f)\right]S(f) = S(f) + \mathrm{sgn}(f)\,S(f) \qquad (4.45)$$

The first term is the Fourier transform of the signal s(t), and the second term is $j$ times the Fourier
transform of the Hilbert Transform. By use of the property $\mathrm{sgn}(f) = 2u(f) - 1$, we can rewrite Equation 4.45 as

$$Z(f) = 2\,u(f)\,S(f) \qquad (4.46)$$

$$Z(f) = \begin{cases} 2S(f), & f > 0 \\ S(0), & f = 0 \\ 0, & f < 0 \end{cases} \qquad (4.47)$$
This is a very important result and is applicable to both lowpass and modulated signals. For modulated
or bandpass signals, its net effect is to double the spectral magnitudes, chop off all negative
components, and, once the carrier is divided out, translate the signal down to baseband.
Complex Envelope

The complex envelope is obtained by shifting the analytic signal down in frequency by the carrier:

$$\tilde{s}(t) = z(t)\,e^{-j2\pi f_c t} = \left[s(t) + j\hat{s}(t)\right] e^{-j2\pi f_c t} \qquad (4.48)$$
We now see clearly that the Complex Envelope is just the frequency shifted version of the analytic
signal. Recognizing that multiplication with the complex exponential in time domain results in
frequency shift in the Frequency domain, using the Fourier Transform results for the analytic signal
above, we get
$$\tilde{S}(f) = Z(f + f_c) = \begin{cases} 2S(f + f_c), & f > -f_c \\ S(0), & f = -f_c \\ 0, & f < -f_c \end{cases} \qquad (4.49)$$
So here is what we have been trying to get at all this time. This result says that the Fourier Transform
of the analytic signal is just the one-sided spectrum. The carrier signal drops out entirely and the
spectrum is no longer symmetrical.
This property is very valuable in simulation. We no longer have to do simulation at carrier frequencies
but only at the highest frequency of the baseband signal. The process applies equally to other
transformations such as filters, which are also down-shifted. It even works when non-linearities are
present in the channel and result in additional frequencies.
There are other uses of complex representation, which we will discuss as we explore these topics;
however, its main use is in simulation.
Let's do an example. Here is a real baseband signal (I have left out the factor $2\pi$ for purposes of
simplification):

$$s(t) = 4\cos(2t) + 6\sin(3t)$$
The spectrum of this signal is shown below, both its individual spectral amplitudes and its magnitude
spectrum. The magnitude spectrum shows one spectral component of magnitude 2 at f = 2 and -2 and
an another one of magnitude 3 at f = 3 and -3.
Figure: The spectral amplitudes (components at $f = \pm 2$ in the real plane and $f = \pm 3$ in the imaginary plane) and the magnitude spectrum
Now let's multiply it with a carrier of frequency 100 to modulate it and to create a bandpass signal:

$$v(t) = \left[4\cos(2t) + 6\sin(3t)\right]\cos(100t)$$
Let's take the Hilbert Transform of this signal. But before we do that, we need to simplify the above so
we only have sinusoids and not their products. This step will make it easy to compute the Hilbert
Transform. By using these trigonometric relationships,

$$\sin A\,\cos B = \frac{\sin(A + B) + \sin(A - B)}{2}$$

$$\cos A\,\cos B = \frac{\cos(A + B) + \cos(A - B)}{2}$$

we get

$$v(t) = 2\cos(102t) + 2\cos(98t) + 3\sin(103t) - 3\sin(97t)$$
Now create the analytic signal by adding the original signal and its Hilbert Transform in quadrature.
Recognizing that each pair of terms is the Euler representation of a sinusoid, we can now rewrite the
analytic signal as

$$z(t) = \left[4\cos(2t) + 6\sin(3t)\right] e^{j100t}$$

But wait a minute, isn't this the original signal and the carrier written in the complex exponential? So
why all the calculations just to get the original signal back?
Now let's take the Fourier Transform of the analytic signal and the complex envelope we have
computed, to show the real advantage of the complex envelope representation of signals.

Fig. 4.14 The Magnitude Spectrum of the Complex Envelope vs. the Analytic Signal
Although this was a passband signal, we see that its complex envelope spectrum is centered around
zero and not the carrier frequency. Also the spectral components are double those in Figure 4.12 and
they are only on the positive side. If you think the result looks suspiciously like a one-sided Fourier
transform, then you would be right.
We do all this because of something Nyquist said. He said that in order to properly reconstruct a
signal, any signal, baseband or passband, it needs to be sampled at least at two times its highest spectral
frequency. That requires that we sample at a frequency of 200.
But we just showed that if we take a modulated signal and go through all this math and create an
analytic signal (which, by the way, does not require any knowledge of the original signal), we can
separate the information signal (the baseband signal s(t)) from the carrier. We do this by dividing the
analytic signal by the carrier. Now all we have left is the baseband signal. All processing can be done
at a sampling frequency of 6 (two times the maximum frequency of 3) instead of 200.
The point here is that this mathematical concept helps us get around the signal processing requirements
set by Nyquist for the sampling of bandpass systems.
The complex envelope is useful primarily for passband signals. In a lowpass signal, the complex
envelope of the signal is the signal itself. But in a passband signal, the complex envelope representation
allows us to easily separate out the carrier.
$$\tilde{s}(t) = 4\cos(2t) + 6\sin(3t)$$

$$z(t) = \tilde{s}(t)\,e^{j100t} = \left[4\cos(2t) + 6\sin(3t)\right] e^{j100t}$$
We see the advantage of this form right away. The complex envelope is just the low-pass part of the
analytic signal; in the analytic signal, this low-pass signal has been multiplied by the complex exponential at
the carrier frequency. The Fourier transform of this representation will lead to the signal translated
back down to baseband (and doubled, with no negative frequency components), making it possible to
get around the Nyquist sampling requirement and reduce computational load.
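The whole procedure can be sketched on the example signal above (the sampling grid is a choice made for this demo): modulate the baseband signal, form the analytic signal in the frequency domain, and divide out the carrier to recover the baseband signal exactly.

```python
import numpy as np

N = 4096
t = 2 * np.pi * np.arange(N) / N               # one full period of the example signal
baseband = 4 * np.cos(2 * t) + 6 * np.sin(3 * t)
v = baseband * np.cos(100 * t)                 # passband (modulated) signal

# Analytic signal via the one-sided-spectrum construction (Equation 4.47)
V = np.fft.fft(v)
H = np.zeros(N)
H[0] = 1.0
H[N // 2] = 1.0
H[1:N // 2] = 2.0
z = np.fft.ifft(H * V)

# Divide out the carrier: the complex envelope is the baseband signal itself
recovered = np.real(z * np.exp(-1j * 100 * t))
print(np.max(np.abs(recovered - baseband)))    # small
```

Once the carrier is divided out, all further processing could run at a sampling rate tied to the baseband bandwidth (highest frequency 3) rather than to the carrier, which is the computational saving the text describes.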
A vector $\mathbf{v}$ in an n-dimensional space is characterized by its components:

$$\mathbf{v} = \left[v_1\ v_2\ \cdots\ v_n\right] \qquad (4.50)$$

The inner product of two n-dimensional vectors is defined as

$$\mathbf{v}_1 \cdot \mathbf{v}_2 = \sum_{i=1}^{n} v_{1i}\,v_{2i} \qquad (4.51)$$

Two vectors are orthogonal if

$$\mathbf{v}_1 \cdot \mathbf{v}_2 = 0 \qquad (4.52)$$

The norm of a vector is

$$\left\|\mathbf{v}\right\| = \left(\sum_{i=1}^{n} v_i^2\right)^{1/2} \qquad (4.53)$$

A set of m vectors is said to be orthonormal if the vectors are orthogonal and each vector has a unit
norm. A set of m vectors is said to be linearly independent if no one vector can be represented as a
linear combination of the remaining vectors.

The norm square of the sum of two vectors is

$$\left\|\mathbf{v}_1 + \mathbf{v}_2\right\|^2 = \left\|\mathbf{v}_1\right\|^2 + \left\|\mathbf{v}_2\right\|^2 + 2\,\mathbf{v}_1 \cdot \mathbf{v}_2 \qquad (4.54)$$
The triangle inequality is

$$\left\|\mathbf{v}_1 + \mathbf{v}_2\right\| \le \left\|\mathbf{v}_1\right\| + \left\|\mathbf{v}_2\right\| \qquad (4.55)$$
Finally, let us review the Gram-Schmidt procedure for constructing a set of orthonormal vectors from a
set of n-dimensional vectors $\mathbf{v}_i$, $1 \le i \le m$. We begin by arbitrarily selecting a vector from the set, say
$\mathbf{v}_1$. By normalizing its length, we obtain the first vector, say

$$\mathbf{u}_1 = \frac{\mathbf{v}_1}{\left\|\mathbf{v}_1\right\|} \qquad (4.56)$$

Next, we may select $\mathbf{v}_2$ and, first, subtract the projection of $\mathbf{v}_2$ onto $\mathbf{u}_1$. Thus, we obtain

$$\mathbf{u}_2' = \mathbf{v}_2 - \left(\mathbf{v}_2 \cdot \mathbf{u}_1\right)\mathbf{u}_1 \qquad (4.57)$$

Then we normalize $\mathbf{u}_2'$ to unit length, which yields $\mathbf{u}_2$. The procedure continues by selecting $\mathbf{v}_3$ and subtracting the projections of $\mathbf{v}_3$ onto $\mathbf{u}_1$ and $\mathbf{u}_2$. Thus, we
have

$$\mathbf{u}_3' = \mathbf{v}_3 - \left(\mathbf{v}_3 \cdot \mathbf{u}_1\right)\mathbf{u}_1 - \left(\mathbf{v}_3 \cdot \mathbf{u}_2\right)\mathbf{u}_2 \qquad (4.59)$$
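The steps in Equations 4.56-4.59 can be sketched directly in code (a generic implementation, with names of my own choosing): subtract the projections onto the already-built orthonormal vectors, then normalize, skipping any vector that is a linear combination of earlier ones.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Return an orthonormal basis for the span of the given vectors."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for q in basis:
            u = u - np.dot(u, q) * q    # remove the projection onto q
        norm = np.linalg.norm(u)
        if norm > tol:                  # keep only independent directions
            basis.append(u / norm)
    return np.array(basis)

V = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
U = gram_schmidt(V)
print(np.round(U @ U.T, 12))  # identity matrix: the result is orthonormal
```

The `U @ U.T` check verifies exactly the orthonormality property defined after Equation 4.53: unit norms on the diagonal and zero inner products off it.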
The concepts of a vector space carry over to signal spaces. The inner product of two (generally complex-valued) signals $x_1(t)$ and $x_2(t)$ is defined as

$$\left\langle x_1(t), x_2(t)\right\rangle = \int_{-\infty}^{\infty} x_1(t)\,x_2^*(t)\,dt \qquad (4.61)$$

The norm of a signal is

$$\left\|x(t)\right\| = \left(\int_{-\infty}^{\infty} \left|x(t)\right|^2 dt\right)^{1/2} \qquad (4.62)$$

The triangle inequality is

$$\left\|x_1(t) + x_2(t)\right\| \le \left\|x_1(t)\right\| + \left\|x_2(t)\right\| \qquad (4.63)$$

and the Cauchy-Schwarz inequality is

$$\left|\int_{-\infty}^{\infty} x_1(t)\,x_2^*(t)\,dt\right| \le \left(\int_{-\infty}^{\infty} \left|x_1(t)\right|^2 dt\right)^{1/2} \left(\int_{-\infty}^{\infty} \left|x_2(t)\right|^2 dt\right)^{1/2} \qquad (4.64)$$
Suppose that s(t) is a deterministic, real-valued signal with finite energy

$$E_s = \int_{-\infty}^{\infty} \left[s(t)\right]^2 dt \qquad (4.65)$$

There exists a set of functions $\left\{f_n(t),\ n = 1, 2, \ldots, K\right\}$ that are orthonormal. We may approximate the
signal s(t) by a weighted linear combination of these functions, i.e.,

$$\hat{s}(t) = \sum_{k=1}^{K} s_k\,f_k(t) \qquad (4.66)$$

where $\left\{s_k,\ 1 \le k \le K\right\}$ are the coefficients in the approximation. The approximation error is

$$e(t) = s(t) - \hat{s}(t) \qquad (4.67)$$

Let us select the coefficients $\left\{s_k\right\}$ so as to minimize the energy of the approximation error:

$$\mathcal{E}_e = \int_{-\infty}^{\infty} \left[s(t) - \sum_{k=1}^{K} s_k\,f_k(t)\right]^2 dt \qquad (4.68)$$
The optimum coefficients in the series expansion of s(t) may be found by differentiating Equation
4.68 with respect to each of the coefficients and setting the first derivatives to zero. The result is

$$s_k = \int_{-\infty}^{\infty} s(t)\,f_k(t)\,dt \qquad (4.69)$$

When every finite energy signal can be represented by a series expansion of the form 4.66 for which
$\mathcal{E}_e = 0$, the set of orthonormal functions $\left\{f_n(t)\right\}$ is said to be complete.
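A quick numerical illustration of Equations 4.66 and 4.69 (the orthonormal set and the test signal are choices made for this example): project a sampled signal onto two orthonormal functions and verify that the expansion with the optimum coefficients reproduces it.

```python
import numpy as np

N = 1000
t = np.linspace(0.0, 1.0, N, endpoint=False)
dt = 1.0 / N

# Two orthonormal functions on [0, 1)
f1 = np.sqrt(2) * np.cos(2 * np.pi * t)
f2 = np.sqrt(2) * np.sin(2 * np.pi * t)

s = 3 * f1 - 2 * f2                 # a signal lying in the span of {f1, f2}

s1 = np.sum(s * f1) * dt            # Equation 4.69: s_k = integral of s(t) f_k(t)
s2 = np.sum(s * f2) * dt
s_hat = s1 * f1 + s2 * f2           # Equation 4.66: reconstruction

print(s1, s2)                       # recovers the coefficients 3 and -2
```

Because the signal lies entirely in the span of the two functions, the residual error energy in Equation 4.68 is zero here, i.e., the set is complete for this signal.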
We have a set of finite energy signal waveforms $\left\{s_i(t),\ i = 1, 2, \ldots, M\right\}$ and we wish to construct a set
of orthonormal waveforms. The first orthonormal waveform is simply constructed as

$$f_1(t) = \frac{s_1(t)}{\sqrt{E_1}} \qquad (4.70)$$

where $E_1$ is the energy of $s_1(t)$. The second waveform is constructed from $s_2(t)$ by first computing the projection of $s_2(t)$ onto
$f_1(t)$, which is

$$c_{12} = \int_{-\infty}^{\infty} s_2(t)\,f_1(t)\,dt \qquad (4.71)$$

Then

$$f_2'(t) = s_2(t) - c_{12}\,f_1(t) \qquad (4.72)$$

And again

$$f_2(t) = \frac{f_2'(t)}{\sqrt{E_2}} \qquad (4.73)$$

where $E_2$ is the energy of $f_2'(t)$. In general,

$$f_k(t) = \frac{f_k'(t)}{\sqrt{E_k}} \qquad (4.74)$$
where

$$f_k'(t) = s_k(t) - \sum_{i=1}^{k-1} c_{ik}\,f_i(t) \qquad (4.75)$$

and

$$c_{ik} = \int_{-\infty}^{\infty} s_k(t)\,f_i(t)\,dt \qquad (4.76)$$
Thus, the orthogonalization process is continued until all the M signal waveforms have been exhausted and N ≤ M orthonormal waveforms have been constructed. The dimensionality N of the signal space will be equal to M if all the signal waveforms are linearly independent, i.e., none of the signal waveforms is a linear combination of the other signal waveforms.
Once we have constructed the set of orthonormal waveforms {fn(t)}, we can express the M signals as linear combinations of the fn(t):

sk(t) = Σ_{n=1}^{N} skn fn(t),  k = 1, 2, …, M   (4.77)

and the energy of each signal is

Ek = ∫ sk²(t) dt = Σ_{n=1}^{N} skn² = ||sk||²   (4.78)
Based on the expression in Equation 4.77, each signal may be represented by the vector

sk = (sk1, sk2, …, skN)   (4.79)

The energy in the kth signal is simply the square of the length of the vector or, equivalently, the square of the Euclidean distance from the origin to the point in the N-dimensional space. Thus, any signal can be represented geometrically as a point in the signal space spanned by the orthonormal functions {fn(t)}.
The orthogonal expansions described above were developed for real-valued signal waveforms.
Finally, let us consider the case in which the signal waveforms are band-pass and represented as
sm(t) = Re[ slm(t) e^{j2πfc t} ],  m = 1, 2, …, M   (4.80)

where slm(t) denote the equivalent low-pass signals. Signal energy may be expressed either in terms of sm(t) or slm(t), as

Em = ∫ sm²(t) dt = (1/2) ∫ |slm(t)|² dt   (4.81)
The similarity between any pair of signal waveforms, say sm(t) and sk(t), is measured by the normalized cross-correlation

ρkm = (1/√(Em Ek)) ∫ sm(t) sk(t) dt   (4.82)

For band-pass signals, the complex-valued cross-correlation coefficient of the equivalent low-pass signals is defined as

ρkm^l = (1/(2√(Em Ek))) ∫ slm(t) slk*(t) dt   (4.83)

then

ρkm = Re(ρkm^l)   (4.84)

or, equivalently, in terms of the signal vectors,

ρkm = sm · sk / (||sm|| ||sk||) = sm · sk / √(Em Ek)   (4.85)
The cross-correlation coefficients between pairs of signal waveforms or signal vectors comprise one set of parameters that characterize the similarity of a set of signals. Another related parameter is the Euclidean distance between a pair of signals,

dkm^(e) = ||sm − sk|| = { Σ_{n=1}^{N} (smn − skn)² }^{1/2}   (4.86)

which may also be expressed as

dkm^(e) = { Em + Ek − 2√(Em Ek) ρkm }^{1/2}   (4.87)
Thus, the Euclidean distance is an alternative measure of the similarity (or dissimilarity) of the set
of signal waveforms or the corresponding signal vectors.
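Equations (4.85)–(4.87) translate directly into vector operations. A small sketch (NumPy assumed; the two example vectors are arbitrary):

```python
import numpy as np

def cross_correlation(sm, sk):
    """Normalized cross-correlation of two signal vectors (Eq. 4.85)."""
    sm, sk = np.asarray(sm, float), np.asarray(sk, float)
    return np.dot(sm, sk) / (np.linalg.norm(sm) * np.linalg.norm(sk))

def euclidean_distance(sm, sk):
    """Euclidean distance between two signal vectors (Eq. 4.86)."""
    return np.linalg.norm(np.asarray(sm, float) - np.asarray(sk, float))

# Eq. (4.87) expresses the same distance through energies and correlation.
s1, s2 = [np.sqrt(2), 0.0, 0.0], [0.0, np.sqrt(2), 0.0]
E1, E2 = np.dot(s1, s1), np.dot(s2, s2)
rho = cross_correlation(s1, s2)
d_direct = euclidean_distance(s1, s2)
d_energy = np.sqrt(E1 + E2 - 2 * np.sqrt(E1 * E2) * rho)
# both evaluate to 2 for these orthogonal unit-energy-2 vectors
```

The two distance computations agree, illustrating that (4.86) and (4.87) are the same quantity written in different coordinates.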
In the following section, we describe digitally modulated signals and make use of the signal space representation for such signals. We shall observe that digitally modulated signals, which are classified as linear, are conveniently expanded in terms of two orthonormal basis functions of the form

f1(t) = √(2/T) cos 2πfc t
f2(t) = √(2/T) sin 2πfc t   (4.88)
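A quick numerical check confirms that these two carriers each have unit energy and are mutually orthogonal when fc is a multiple of 1/T. The sketch assumes NumPy; T = 1 and fc = 10 are arbitrary choices of mine:

```python
import numpy as np

T, fc = 1.0, 10.0                 # arbitrary, with fc a multiple of 1/T
t, dt = np.linspace(0.0, T, 20001, retstep=True)
f1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * fc * t)
f2 = np.sqrt(2.0 / T) * np.sin(2 * np.pi * fc * t)

E1 = np.sum(f1 ** 2) * dt         # energy of f1(t): ~ 1
E2 = np.sum(f2 ** 2) * dt         # energy of f2(t): ~ 1
inner = np.sum(f1 * f2) * dt      # inner product: ~ 0
```

The small residual errors come only from the discretization of the integrals.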
Problem 1
Determine the autocorrelation function of the random process

s(t) = cos(2πfc t + θ)   (4.89)

where the phase θ is uniformly distributed over (0, 2π).

Solution

Rs(t1, t2) = E[s(t1) s(t2)] = E[cos(2πfc t1 + θ) cos(2πfc t2 + θ)]
= (1/2) cos 2πfc(t1 − t2) + (1/2) E[cos(2πfc(t1 + t2) + 2θ)]

But

E[cos(2πfc(t1 + t2) + 2θ)] = (1/2π) ∫_0^{2π} cos(2πfc(t1 + t2) + 2θ) dθ = 0

Hence

Rs(t1, t2) = (1/2) cos 2πfc(t1 − t2)
Problem 2
Let us apply the Gram-Schmidt procedure to the set of four waveforms illustrated in Figure:
s1(t) = 1 for 0 ≤ t ≤ 2
s2(t) = 1 for 0 ≤ t ≤ 1, −1 for 1 ≤ t ≤ 2
s3(t) = 1 for 0 ≤ t ≤ 2, −1 for 2 ≤ t ≤ 3
s4(t) = −1 for 0 ≤ t ≤ 3
(each waveform is zero outside the indicated interval)
Solution
f1(t) = s1(t)/√E1 = s1(t)/√2

Since c12 = ∫ s2(t) f1(t) dt = 0, s2(t) is orthogonal to f1(t), and

f2(t) = s2(t)/√E2 = s2(t)/√2

To obtain f3(t), we compute c13 and c23, which are c13 = √2 and c23 = 0. Thus,

f3'(t) = s3(t) − √2 f1(t) = { −1, 2 ≤ t ≤ 3; 0, otherwise }

Since f3'(t) has unit energy, it follows that f3(t) = f3'(t). In determining f4(t), we find that c14 = −√2, c24 = 0 and c34 = 1. Hence

f4'(t) = s4(t) + √2 f1(t) − 0·f2(t) − f3(t) = 0

Consequently, s4(t) is linearly dependent on f1(t) and f3(t), and no fourth orthonormal function is obtained.
Problem 3
Let us obtain the vector representation of the four signals of Problem 2.

Solution
Since the dimensionality of the signal space is N = 3, each signal is described by three components. The signal s1(t) is characterized by the vector s1 = (√2, 0, 0). Similarly, the signals s2(t), s3(t) and s4(t) are characterized by the vectors s2 = (0, √2, 0), s3 = (√2, 0, 1) and s4 = (−√2, 0, 1), respectively.
[Figure: the four signal points s1, s2, s3, s4 plotted in the signal space with coordinates (f1, f2, f3)]
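The results of Problems 2 and 3 can be checked numerically on sampled versions of the four waveforms. The sketch below assumes NumPy; the step size and the helper `gram_schmidt` are my own choices, not from the text:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 3.0, dt)

# Sampled versions of the four waveforms of Problem 2.
s1 = np.where(t < 2, 1.0, 0.0)
s2 = np.where(t < 1, 1.0, np.where(t < 2, -1.0, 0.0))
s3 = np.where(t < 2, 1.0, -1.0)
s4 = -np.ones_like(t)

def gram_schmidt(signals, dt, tol=1e-6):
    """Waveform Gram-Schmidt (Eqs. 4.70-4.76) on sampled signals."""
    basis = []
    for s in signals:
        r = s.astype(float)
        for f in basis:
            r = r - (np.sum(r * f) * dt) * f    # subtract c_ik * f_i(t)
        E = np.sum(r ** 2) * dt
        if E > tol:                             # skip dependent waveforms
            basis.append(r / np.sqrt(E))
    return basis

basis = gram_schmidt([s1, s2, s3, s4], dt)
# N = 3 orthonormal waveforms survive; projecting each signal onto the
# basis recovers the vectors of Problem 3, e.g. s3 -> (sqrt(2), 0, 1).
vectors = [[np.sum(s * f) * dt for f in basis] for s in (s1, s2, s3, s4)]
```

Because s4(t) is a linear combination of the first three waveforms, its residual energy falls below the tolerance and it contributes no basis function.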
Problem 4
Determine the correlation coefficients among the four signal waveforms shown in the Figure in Problem 2, and the corresponding Euclidean distances.
Solution
For the signals in Problem 2, the energies are

E1 = 2,  E2 = 2,  E3 = 3,  E4 = 3

and the cross-correlation coefficients, computed from the signal vectors of Problem 3, are

ρ12 = 0,  ρ13 = 2/√6,  ρ14 = −2/√6,  ρ23 = 0,  ρ24 = 0,  ρ34 = −1/3

The corresponding Euclidean distances are

d12 = [E1 + E2 − 2√(E1E2) ρ12]^{1/2} = 2
d13 = [E1 + E3 − 2√(E1E3) ρ13]^{1/2} = [2 + 3 − 4]^{1/2} = 1
d14 = [E1 + E4 − 2√(E1E4) ρ14]^{1/2} = [2 + 3 + 4]^{1/2} = 3
d23 = [E2 + E3]^{1/2} = √5,  d24 = [E2 + E4]^{1/2} = √5
d34 = [E3 + E4 − 2√(E3E4) ρ34]^{1/2} = [3 + 3 + 2]^{1/2} = 2√2
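These numbers follow directly from the vector representations found in Problem 3; a short check (NumPy assumed):

```python
import numpy as np

r2 = np.sqrt(2.0)
# Vector representations from Problem 3.
S = {1: np.array([r2, 0.0, 0.0]), 2: np.array([0.0, r2, 0.0]),
     3: np.array([r2, 0.0, 1.0]), 4: np.array([-r2, 0.0, 1.0])}

def rho(k, m):
    """Cross-correlation coefficient, Eq. (4.85)."""
    return S[k] @ S[m] / (np.linalg.norm(S[k]) * np.linalg.norm(S[m]))

def dist(k, m):
    """Euclidean distance, Eq. (4.86)."""
    return np.linalg.norm(S[k] - S[m])

# rho(1, 3) = 2/sqrt(6), rho(3, 4) = -1/3, dist(1, 2) = 2,
# dist(1, 4) = 3, dist(3, 4) = 2*sqrt(2)
```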
Problem 5
Carry out the Gram-Schmidt orthogonalization of the signals in Problem 2 in the order s4(t), s3(t), s2(t), s1(t) and, thus, obtain a set of orthonormal functions. Then, determine the vector representations of the signals and determine the signal energies.
Solution
The first orthonormal function is

f4(t) = s4(t)/√E4 = { −1/√3, 0 ≤ t ≤ 3; 0, otherwise }

Next,

c43 = ∫ s3(t) f4(t) dt = −1/√3

and

f3'(t) = s3(t) − c43 f4(t) = { 2/3, 0 ≤ t ≤ 2; −4/3, 2 ≤ t ≤ 3; 0, otherwise }

Hence, normalizing f3'(t) by its energy E3' = 8/3, we obtain

f3(t) = { 1/√6, 0 ≤ t ≤ 2; −2/√6, 2 ≤ t ≤ 3; 0, otherwise }

Next, c42 = ∫ s2(t) f4(t) dt = 0 and c32 = ∫ s2(t) f3(t) dt = 0, so s2(t) is already orthogonal to f4(t) and f3(t). Therefore,

f2(t) = s2(t)/√E2 = { 1/√2, 0 ≤ t ≤ 1; −1/√2, 1 ≤ t ≤ 2; 0, otherwise }

where E2 = ∫ s2²(t) dt = 2. Finally,

c41 = ∫ s1(t) f4(t) dt = −2/√3
c31 = ∫ s1(t) f3(t) dt = 2/√6
c21 = ∫ s1(t) f2(t) dt = 0

Hence

f1'(t) = s1(t) − c41 f4(t) − c31 f3(t) − c21 f2(t) = 0

The last result is expected, since the dimensionality of the vector space generated by these signals is 3. Based on the basis functions f2(t), f3(t), f4(t), the vector representations of the signals are

s4 = (0, 0, √3),  E4 = 3
s3 = (0, √(8/3), −1/√3),  E3 = 3
s2 = (√2, 0, 0),  E2 = 2
s1 = (0, 2/√6, −2/√3),  E1 = 2
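The intermediate quantities of this solution are easy to verify on sampled waveforms (NumPy assumed; the waveforms are those of Problem 2, and the step size is my own choice):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 3.0, dt)
s3 = np.where(t < 2, 1.0, -1.0)       # s3(t) from Problem 2
s4 = -np.ones_like(t)                 # s4(t) from Problem 2

f4 = s4 / np.sqrt(np.sum(s4 ** 2) * dt)   # first basis function: -1/sqrt(3)
c43 = np.sum(s3 * f4) * dt                # projection of s3 onto f4
r3 = s3 - c43 * f4                        # residual waveform f3'(t)
E3r = np.sum(r3 ** 2) * dt                # its energy, 8/3
f3 = r3 / np.sqrt(E3r)

# c43 = -1/sqrt(3); f3 takes the value 1/sqrt(6) on [0, 2)
# and -2/sqrt(6) on [2, 3), as in the solution above.
```

Note how the basis obtained in this processing order differs from that of Problem 2, while the signal energies are of course unchanged.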
4.4.6 Summary
Suppose we further impose the constraint that the complex baseband signal s(t) is approximately bandlimited to W/2 Hz (and time-limited to (−T/2, T/2), say), and impose no other constraints on the signal space. Then the appropriate basis functions for the signal space are the Prolate Spheroidal Wave Functions (PSWFs); see the papers by Slepian, Landau and Pollak for a description of PSWFs. This basis is optimum in the sense that, although there are a countably infinite number of functions in the set, at most WT of these are enough to capture most of the energy of any signal in this signal space. So the signal space of complex signals that are approximately bandlimited to W/2 Hz and time-limited to (−T/2, T/2) is approximately finite-dimensional.
More typically in communication systems, s(t) is one of M possible signals s1(t), s2(t), …, sM(t). If we let S = span{s1(t), …, sM(t)}, then dim S = n ≤ M. The signal can then be considered to belong to the n-dimensional space S. One can find an orthonormal basis for S by the standard Gram-Schmidt procedure.
The energy of a signal s(t) is denoted by Es and is given by

Es = ∫ |s(t)|² dt

The correlation between two signals sk(t) and sm(t), which is a measure of the similarity between these two signals, is given by
ρkm = (1/√(Ek Em)) ∫ sk(t) sm*(t) dt

The distance between two signals sk(t) and sm(t), which is also a measure of the similarity between these two signals, is given by

dkm = [Ek + Em − 2√(Ek Em) Re(ρkm)]^{1/2}
4.5 EXERCISES
1. Prove the following properties of Hilbert transforms:
a. If x(t) is an even function of t, then x̂(t) is odd
b. If x(t) is an odd function of t, then x̂(t) is even
c. If x(t) = cos ω0t, then x̂(t) = sin ω0t
d. If x(t) = sin ω0t, then x̂(t) = −cos ω0t
2. Find a set of orthonormal basis functions for the signals shown in the figure, which are defined on the interval −1 ≤ t ≤ 1.
3. Use the Gram-Schmidt procedure to find an orthonormal basis for the signal set given below. Express each signal in terms of the orthonormal basis set found.
s1(t) = 1, 0 ≤ t ≤ 2
s2(t) = cos(…), 0 ≤ t ≤ 2
s3(t) = sin(…), 0 ≤ t ≤ 2
s4(t) = sin(…), 0 ≤ t ≤ 2
[Figure: three waveforms, each defined on the interval 0 ≤ t ≤ 3]
5. Determine the correlation coefficient among the signals shown in Figure, and the
corresponding Euclidean distances.
[Figure: four two-dimensional signal constellations, each showing signal points s1 and s2 relative to the basis functions f1(t) and f2(t)]
Suppose that s(t) is either a real- or complex-valued signal that is represented as a linear combination of orthonormal functions fn(t):

ŝ(t) = Σn cn fn(t)

where

∫ fn(t) fm*(t) dt = { 1, n = m; 0, n ≠ m }

Determine the expressions for the coefficients cn in the expansion that minimize the energy

E = ∫ |s(t) − ŝ(t)|² dt
s(t) = { 1, 0 ≤ t ≤ 1; −1, 1 ≤ t ≤ 3; 1, 3 ≤ t ≤ 4 }
9. Determine a set of orthonormal functions for the four signals shown in Figure