José De Doná
June 6, 2015
Abstract
These notes are based on the excellent classic textbook by B. P. Lathi: Signal Processing & Linear Systems, Oxford University Press 1998 (or the International Edition 2009/2010). There is no claim to originality, nor should they be viewed as a substitute for reading the textbook. On the contrary, they are intended as a brief summary and a study guide to be used in conjunction with reading the main text. (These notes may be regarded as the analogue of taking painkillers, while reading the textbook would be the equivalent of the real solution to the problem: undergoing root canal therapy.)
1 Example of analogue filter

One of the most fundamental topics that are covered in this Signal Processing course is Filter Design. In order to obtain a useful filter we will now make a modification to the circuit of Figure 1 by introducing an extra element, a capacitor C, as shown in Figure 3. Rule 2 above and Kirchhoff's current law give:

i1(t) = i2(t) + i3(t). (4)
Recalling that Rule 1 above implies that v−(t) = v+(t) = 0, we then have that the voltage across the capacitor C is vc(t) = v−(t) − vo(t) = −vo(t), and hence the current through the capacitor is:

i3(t) = C dvc(t)/dt = −C dvo(t)/dt. (5)

Figure 3: Analog filter.
Since Equations (2) still apply, substituting i1(t) and i2(t) from those equations and i3(t) from Equation (5) into Equation (4) and rearranging we obtain:

dvo(t)/dt + (1/(CRf)) vo(t) = −(1/(CRi)) vi(t), (6)
and we can see that the slope starts to 'back off' from the initial value −1/(CRi) and becomes smaller in magnitude.
This situation continues until vo(t) reaches a value such that:

−1/(CRi) − (1/(CRf)) vo = 0,

that is, vo = −Rf/Ri, when dvo(t)/dt = 0, and the circuit attains a new equilibrium (see Figure 4). Comparing
Figure 4 with Figure 2 we can see that the introduction of the capacitor C makes the circuit function as a lowpass filter, since it 'smoothes out' the sharp edge of the step signal (in other words, it opposes sudden changes, i.e., high frequencies).
We will now solve the differential equation (6). First apply the integrating factor e^(t/CRf) to obtain

e^(t/CRf) dvo(t)/dt + (e^(t/CRf)/(CRf)) vo(t) = −(e^(t/CRf)/(CRi)) vi(t),

which can be rewritten using the product rule as

d/dt [vo(t) e^(t/CRf)] = −(e^(t/CRf)/(CRi)) vi(t).

Integrating both sides,
∫_0^t (d/dτ)[vo(τ) e^(τ/CRf)] dτ = −(1/(CRi)) ∫_0^t e^(τ/CRf) vi(τ) dτ.
Solving the left-hand side integral and considering the initial condition vo(0) = 0 we obtain:

vo(t) e^(t/CRf) − vo(0) e^(0/CRf) = vo(t) e^(t/CRf) = −(1/(CRi)) ∫_0^t e^(τ/CRf) vi(τ) dτ.
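This first-order response is easy to verify numerically. The sketch below (assuming NumPy, and with hypothetical component values that are not taken from the notes) integrates (6) for a unit-step input and compares the result against the closed-form step response vo(t) = −(Rf/Ri)(1 − e^(−t/CRf)):

```python
import numpy as np

# Forward-Euler integration of (6) for a unit-step input, compared with the
# closed-form step response vo(t) = -(Rf/Ri)(1 - e^(-t/CRf)).
# Component values are hypothetical (not from the notes): Ri = Rf = 10 kΩ, C = 1 µF.
Ri, Rf, C = 10e3, 10e3, 1e-6
tau = C * Rf                      # time constant CRf = 10 ms

dt = 1e-5
t = np.arange(0.0, 0.1, dt)       # ten time constants
vi = np.ones_like(t)              # unit step input
vo = np.zeros_like(t)             # initial condition vo(0) = 0
for m in range(len(t) - 1):
    # dvo/dt = -vo/(C*Rf) - vi/(C*Ri), from equation (6)
    vo[m + 1] = vo[m] + dt * (-vo[m] / (C * Rf) - vi[m] / (C * Ri))

vo_exact = -(Rf / Ri) * (1.0 - np.exp(-t / tau))
print(np.max(np.abs(vo - vo_exact)))   # small discretisation error
print(vo[-1])                          # settles near -Rf/Ri, the new equilibrium
```

With these values the output indeed settles at −Rf/Ri = −1, confirming the equilibrium found above.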
For the second-order section:

R2 = 1/(aC2) = 1/(178.9 × 0.47 × 10^−6) = 11,893 Ω,

R3 = 2a/(bC1) = (2 × 178.9)/(988,300 × 0.47 × 10^−6) = 770 Ω.

(2) For the first-order section, K = 4 and Ωc = 289.5:

R1 = 1/(Ωc C) = 1/(289.5 × 0.47 × 10^−6) = 7,349 Ω.

Let R2 = 1,500 Ω; then

R3 = (K − 1)R2 = 3 × 1,500 = 4,500 Ω,

and the design is complete. The final circuit is shown in Fig. 13.

Figure 6: Impulse as limit of a gate function.

We can immediately realise that the function δ(t) has unusual features. For example, how can a function that is zero almost everywhere have an area equal to one? We can imagine this function as the limit of a gate function, as shown in Figure 6. Note that the area underneath each of the functions in the limiting process of Figure 6 is always equal to 1 and hence the limit has area 1 (and, in particular, at t = 0 it shoots to infinity!). Strictly speaking, δ(t) is not a function but something known as a 'generalised function', which is rather defined by its effect on other functions instead of by its value at every instant of time. In particular, the effect on other functions is characterised by the following property.
Sampling property of the unit impulse function
Let f (t) be a function and consider the following integral:
∫_{−∞}^{∞} f(τ)δ(τ) dτ = f(0) ∫_{−∞}^{∞} δ(τ) dτ = f(0),

since δ(τ) = 0 for τ ≠ 0 (so that f(τ)δ(τ) = f(0)δ(τ)) and the area under δ(τ) equals 1.
So, the area under the product f (t)δ(t) equals the value of the function f (t) at the instant when the impulse is
located.
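This sampling property can be checked numerically with the gate-function approximation of Figure 6. A minimal sketch, assuming NumPy and an arbitrarily chosen test function f:

```python
import numpy as np

# Approximate δ(τ) by a gate of width eps and height 1/eps (unit area, as in
# Figure 6) and check that the area under f(τ)·δ_eps(τ) approaches f(0) = 1.
f = lambda tau: np.cos(tau) + tau ** 2    # arbitrary smooth test function

areas = []
for eps in [1e-1, 1e-2, 1e-3]:
    tau = np.linspace(-eps / 2, eps / 2, 1001)
    vals = f(tau) / eps                   # f(τ) times the gate height 1/eps
    dtau = tau[1] - tau[0]
    # trapezoidal rule for the area under the product
    area = dtau * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)
    areas.append(area)
    print(eps, area)                      # tends to f(0) = 1 as eps shrinks
```

As the gate narrows, the area under the product converges to f(0), exactly as the sampling property predicts.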
Consider now a shifted version of the impulse, δ(t − τ) (here we think of δ(t − τ) as a function of τ shifted by t). Notice first, using (8) and making the change of variables t − τ = x, dτ = −dx, that:

∫_{−∞}^{∞} δ(t − τ) dτ = ∫_{∞}^{−∞} δ(x)(−dx) = ∫_{−∞}^{∞} δ(x) dx = 1.
Figure 11: The input is approximated as a sum of rectangular pulses; each pulse has strength (area) f(nΔτ)Δτ and produces a delayed, scaled impulse response; the total response is obtained by summing all the components (superposition).
As shown in Figure 11, the output is obtained as the sum (superposition) of the responses to the individual components:

y(t) = lim_{Δτ→0} Σ_{n=−∞}^{∞} f(nΔτ) h(t − nΔτ) Δτ = ∫_{−∞}^{∞} f(τ)h(t − τ) dτ, (12)
which tells us that, knowing the impulse response h(t) of a linear system, we can determine the response y(t) to any input f(t) (that is, provided we can solve Equation (12)).
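The limiting superposition sum above can be reproduced directly on a computer. A sketch assuming NumPy, with the illustrative choices f(t) = u(t) and h(t) = e^(−t)u(t), for which the exact output is 1 − e^(−t):

```python
import numpy as np

# Discrete approximation of the convolution integral (12):
# y(t) ≈ Σ_n f(nΔτ) h(t - nΔτ) Δτ, for f(t) = u(t) and h(t) = e^(-t) u(t),
# whose exact convolution is y(t) = 1 - e^(-t).
dtau = 1e-3
t = np.arange(0.0, 5.0, dtau)
f = np.ones_like(t)                      # unit step samples, t >= 0
h = np.exp(-t)                           # impulse-response samples, t >= 0

y = np.convolve(f, h)[: len(t)] * dtau   # superposition sum, scaled by Δτ

y_exact = 1.0 - np.exp(-t)
print(np.max(np.abs(y - y_exact)))       # shrinks further as Δτ is reduced
```

The discrete sum agrees with the exact integral to within the step size Δτ, illustrating the limit in (12).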
Properties of convolution
1. Commutative: f1 (t) ∗ f2 (t) = f2 (t) ∗ f1 (t).
2. Distributive: f1 (t) ∗ [f2 (t) + f3 (t)] = f1 (t) ∗ f2 (t) + f1 (t) ∗ f3 (t).
3. Associative: f1 (t) ∗ [f2 (t) ∗ f3 (t)] = [f1 (t) ∗ f2 (t)] ∗ f3 (t).
4. Shift property: if f1(t) ∗ f2(t) = c(t), then f1(t) ∗ f2(t − T) = c(t − T).
5. Convolution with an impulse: f(t) ∗ δ(t) = ∫_{−∞}^{∞} f(τ)δ(t − τ) dτ = f(t) (see Equation (10) above).
6. Width property: if the durations (widths) of f1(t) and f2(t) are T1 and T2 respectively, then the duration (width) of f1(t) ∗ f2(t) is T1 + T2 (see Figure 12).
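The commutative and width properties can be illustrated numerically on sampled signals; a sketch assuming NumPy, using two rectangular pulses of widths T1 = 1 and T2 = 2 (arbitrary choices):

```python
import numpy as np

# Numerical check of the commutative property and the width property, using
# two rectangular pulses of widths T1 = 1 and T2 = 2 (arbitrary choices).
dt = 1e-3
f1 = np.ones(int(1.0 / dt))              # pulse of width T1 = 1
f2 = np.ones(int(2.0 / dt))              # pulse of width T2 = 2

c12 = np.convolve(f1, f2) * dt
c21 = np.convolve(f2, f1) * dt
print(np.max(np.abs(c12 - c21)))         # commutative: the two agree

support = np.flatnonzero(c12 > 1e-9)     # samples where the result is nonzero
width = (support[-1] - support[0]) * dt
print(width)                             # width property: close to T1 + T2 = 3
```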
Causality
In general, we will consider inputs to a system that start at t = 0 and that are zero before that:
If the system is causal, its response to an impulse δ(t) (which is located at t = 0) cannot begin before t = 0.
Therefore, for a causal system:
h(t) = 0 for t < 0.
Consider again the output of a filter given in (12) above. We have:
f (τ ) = 0 for τ < 0,
and,
h(t − τ ) = 0 for t − τ < 0, i.e., for τ > t.
Hence, the product f(τ)h(t − τ) is 0 for τ < 0 and for τ > t, and we can write the output equation (12) of a causal filter as:

y(t) = f(t) ∗ h(t) = ∫_0^t f(τ)h(t − τ) dτ for t ≥ 0, and y(t) = 0 for t < 0.
This result shows that if the impulse response h(t) is causal, then the filter response is also causal.
One critical point is to realise that this integral is performed with respect to τ, so that t is a parameter (like a constant). We plot f(τ) (that is easy: just substitute t → τ in f(t)). Next we flip h(τ) to obtain h(−τ), then we shift it (slide it) by t time units to get h(t − τ), and evaluate the area under the product f(τ)h(t − τ) to obtain the value of y(t). Then we repeat the process over and over again for different values of t. This process will be
illustrated in the following section for the response of the analog filter shown in Figure 3.
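The flip-and-slide procedure just described translates directly into code. A sketch assuming NumPy, reusing the illustrative pair f(t) = u(t) and h(t) = e^(−t)u(t) (not signals from the notes), for which the exact answer is y(t) = 1 − e^(−t):

```python
import numpy as np

# Flip-and-slide evaluation of y(t): for each value of the parameter t, flip
# h(τ), slide it by t, multiply by f(τ) and take the area under the product.
# Check pair: f(t) = u(t), h(t) = e^(-t) u(t), with exact y(t) = 1 - e^(-t).
f = lambda tau: np.where(tau >= 0, 1.0, 0.0)
h = lambda tau: np.where(tau >= 0, np.exp(-tau), 0.0)

tau = np.arange(-1.0, 10.0, 1e-3)        # integration grid for τ
dtau = tau[1] - tau[0]

def y_at(t):
    # h(t - τ) is the flipped-and-shifted impulse response
    return np.sum(f(tau) * h(t - tau)) * dtau

for t in [0.5, 1.0, 2.0]:
    print(t, y_at(t), 1.0 - np.exp(-t))  # numeric value vs exact value
```

Each call to y_at performs exactly one flip-shift-multiply-integrate step of the graphical procedure; sweeping t traces out the whole output.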
Figure 14: Graphical convolution.
We can see, in particular, from the previous graphical analysis that the final value of vo(t) is the total area under h(τ) from τ = 0 to τ = ∞. Indeed,

vo,final = −∫_0^∞ (1/(CRi)) e^(−τ/CRf) dτ = (CRf/(CRi)) e^(−τ/CRf) |_{τ=0}^{τ=∞} = −Rf/Ri,
and we conclude that the DC gain of a linear filter (or any other linear system) is the area underneath its impulse
response.
We say that f(t) and F(ω) are a Fourier transform pair, and write this as f(t) ⇐⇒ F(ω) or, equivalently, F(ω) = F[f(t)] and f(t) = F⁻¹[F(ω)].
The Fourier representation is a way of expressing a signal in terms of everlasting sinusoids (or exponentials).
The Fourier spectrum of a signal indicates the relative amplitudes and phases of the sinusoids that are required to synthesise that signal. We can see from (14) that the contribution of the exponential function e^(jωt) within a band dω is equal to (1/(2π)) F(ω) dω = F(ω) dF, where dF = dω/(2π) is the bandwidth in hertz. (Recall that ω = 2πF, where ω is angular frequency in rad/sec and F is frequency in hertz = cycles/sec.) Clearly, F(ω) is the 'spectral density' per unit bandwidth (in hertz). F(ω) is commonly referred to as the spectrum (or Fourier spectrum) of f(t).
Using Euler's formula e^(jθ) = cos θ + j sin θ in (13) we obtain:

F(ω) = ∫_{−∞}^{∞} f(t)(cos ωt − j sin ωt) dt = ∫_{−∞}^{∞} f(t) cos ωt dt − j ∫_{−∞}^{∞} f(t) sin ωt dt,

and hence we can see that, in general, F(ω) is a complex function, with amplitude |F(ω)| and phase ∠F(ω) spectra:

F(ω) = |F(ω)| e^(j∠F(ω)). (15)
If f (t) is a real function we can see, by comparing the expressions for F (ω) and F (−ω) above, that:
|F(−ω)| = |F(ω)|,   ∠F(−ω) = −∠F(ω), (16)
so that the amplitude is an even function and the phase is an odd function of ω. In other words, the spectrum for
negative frequencies, F (−ω), is the complex conjugate of the spectrum for positive frequencies, F (ω); denoted
F (−ω) = [F (ω)]∗ . For this reason, this is known as the conjugate symmetry property of the Fourier transform
and is valid for any real function f (t).
Example 1

Consider the function f(t) = e^(−at) u(t) shown on the left-hand side of Figure 16 below, where u(t) is the unit step function shown in Figure 15:

u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0.
Figure 15: Unit step function.

Direct computation from (13) yields:

F(ω) = ∫_{−∞}^{∞} e^(−at) u(t) e^(−jωt) dt = ∫_{−∞}^{0} e^(−at) × 0 × e^(−jωt) dt + ∫_{0}^{∞} e^(−at) × 1 × e^(−jωt) dt
= ∫_0^∞ e^(−at) e^(−jωt) dt = ∫_0^∞ e^(−(a+jω)t) dt = −(1/(a + jω)) e^(−(a+jω)t) |_0^∞.

Notice that lim_{t→∞} e^(−(a+jω)t) = lim_{t→∞} e^(−at) e^(−jωt) = 0 if a > 0, since the magnitude of e^(−jωt) is always 1.
Therefore we have:

F(ω) = 1/(a + jω), for a > 0.

Computing the amplitude and phase of F(ω):

|F(ω)| = 1/√(a² + ω²),   ∠F(ω) = −tan⁻¹(ω/a),

which are shown on the right-hand side of Figure 16.

Figure 16: Fourier transform pair for Example 1.
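The result of Example 1 can be checked by evaluating the defining integral numerically; a sketch assuming NumPy, with the arbitrary choice a = 2:

```python
import numpy as np

# Numerical check of Example 1: F(ω) = ∫_0^∞ e^(-at) e^(-jωt) dt should equal
# 1/(a + jω) for a > 0. Here a = 2, an arbitrary positive choice.
a = 2.0
t = np.arange(0.0, 40.0, 1e-4)           # 80 time constants: the tail is negligible
dt = t[1] - t[0]

errs = []
for w in [0.0, 1.0, 5.0]:
    F_num = np.sum(np.exp(-a * t) * np.exp(-1j * w * t)) * dt
    F_exact = 1.0 / (a + 1j * w)
    errs.append(abs(F_num - F_exact))
    print(w, errs[-1])                   # small for every ω tested
```

The numeric transform also reproduces the amplitude 1/√(a² + ω²) and phase −tan⁻¹(ω/a) plotted in Figure 16.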
Existence of the Fourier transform
We saw in Example 1 above that the Fourier transform of f (t) = e−at u(t) does not exist if a < 0. Clearly, not all
signals are Fourier transformable. The existence of the Fourier transform is assured if f (t) satisfies the Dirichlet
conditions, the first of which is:
|F(ω)| ≤ ∫_{−∞}^{∞} |f(t) e^(−jωt)| dt = ∫_{−∞}^{∞} |f(t)| dt < ∞   (first Dirichlet condition).
The second Dirichlet condition is that, in any finite interval, f (t) may have only a finite number of maxima and
minima and a finite number of finite discontinuities. Any function that can be generated in practice (e.g., in the
laboratory) satisfies the Dirichlet conditions and, thus, has a Fourier transform.
If we now have a general input function f(t) and we express it as an infinite sum of exponential functions (this is precisely (14)) we have:

f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^(jωt) dω.
The output corresponding to f (t) is:
y(t) = h(t) ∗ f(t) = ∫_{−∞}^{∞} h(τ) f(t − τ) dτ = ∫_{−∞}^{∞} h(τ) [(1/(2π)) ∫_{−∞}^{∞} F(ω) e^(jω(t−τ)) dω] dτ
= (1/(2π)) ∫_{−∞}^{∞} F(ω) [∫_{−∞}^{∞} h(τ) e^(−jωτ) dτ] e^(jωt) dω   (17)
= (1/(2π)) ∫_{−∞}^{∞} F(ω) H(ω) e^(jωt) dω,

where H(ω) = ∫_{−∞}^{∞} h(τ) e^(−jωτ) dτ is the Fourier transform of h(t).
So we have an analogous case to that of Figure 11, where we decomposed the input f (t) as a sum of its impulse
components, and then we recomposed the output, using superposition, as an infinite sum of the responses to the
individual impulses (thus obtaining the convolution integral (12)). Here we have:
System response to an individual exponential: e^(jωt) → H(ω) e^(jωt).

Input signal f(t) formed as an infinite sum of exponential components: f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^(jωt) dω.

The output y(t) is the sum of the responses to the exponential components: f(t) → y(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) H(ω) e^(jωt) dω.
Thus, transmission of a signal through a linear system can be viewed as transmission of the various sinusoidal
components of the signal through the system.
Equation (17) also tells us that y(t) is the inverse Fourier transform of the product F (ω)H(ω). That is,
y(t) = (1/(2π)) ∫_{−∞}^{∞} Y(ω) e^(jωt) dω = (1/(2π)) ∫_{−∞}^{∞} F(ω)H(ω) e^(jωt) dω,   with Y(ω) = F(ω)H(ω). (18)
Equation (18) is a fundamental relationship, and says that the Fourier transform of the output of a linear system
is given by the product of the Fourier transform of the input and the Fourier transform of the impulse response of
the system.
So, during transmission through a channel, the input signal amplitude spectrum |F(ω)| is changed to |F(ω)||H(ω)|, and the input signal phase spectrum ∠F(ω) is changed to ∠F(ω) + ∠H(ω).
An input signal spectral component of frequency ω is modified in amplitude by a factor |H(ω)| and is shifted in phase by an angle ∠H(ω).
The plots of |H(ω)| and ∠H(ω) as functions of ω show at a glance how the system modifies the amplitudes and phases of various sinusoidal inputs. For this reason, H(ω) (the Fourier transform of the impulse response) is called the frequency response of the system (a fundamental concept).
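For sampled signals, the relation Y(ω) = F(ω)H(ω) of (18) has an exact discrete counterpart: linear convolution in time equals multiplication of DFTs, provided both sequences are zero-padded to the full output length. A sketch assuming NumPy:

```python
import numpy as np

# Discrete counterpart of Y(ω) = F(ω)H(ω): linear convolution in time equals
# an inverse DFT of the product of DFTs, once both sequences are zero-padded
# to the full output length n = len(f) + len(h) - 1.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)      # arbitrary input samples
h = rng.standard_normal(32)      # arbitrary impulse-response samples

n = len(f) + len(h) - 1
y_time = np.convolve(f, h)                                       # time domain
y_freq = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(h, n)).real   # frequency domain
print(np.max(np.abs(y_time - y_freq)))   # negligible (machine precision)
```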
Figure 18: Spectrum of pulse function.
Figure 19: Amplitude and phase spectra of pulse function.
Indeed, the sampling property of the impulse function (see (9)) gives:

F[δ(t)] = ∫_{−∞}^{∞} δ(t) e^(−jωt) dt = e^(−jω×0) = 1,

that is, δ(t) ⇐⇒ 1.
Exercise 1

Show that F⁻¹[2πδ(ω)] = 1; i.e., 1 ⇐⇒ 2πδ(ω).

Exercise 2

Show that F⁻¹[2πδ(ω − ω0)] = e^(jω0 t); i.e., e^(jω0 t) ⇐⇒ 2πδ(ω − ω0).
Exercise 3
Show that cos ω0 t ⇐⇒ π [δ(ω + ω0 ) + δ(ω − ω0 )] .
Unit step function u(t)
The unit step function u(t) was defined in Example 1, on Page 9 above (see
Figure 15). Direct application of the Fourier transform formula (13) yields
an indeterminate result. So, we approach the problem by a limit process.
Consider:

u(t) = lim_{a→0} e^(−at) u(t)

(see Figure 21). Using the result obtained in Example 1, Page 9, we have:

Figure 21: Step function as limit of exponentials.
U(ω) = F[u(t)] = lim_{a→0} F[e^(−at) u(t)] = lim_{a→0} 1/(a + jω) = lim_{a→0} [a/(a² + ω²) − j ω/(a² + ω²)] = πδ(ω) + 1/(jω),

and we thus obtain this Fourier transform pair:

u(t) ⇐⇒ πδ(ω) + 1/(jω). (21)
In deriving the last limit we have taken into account the fact that the function a/(a² + ω²) has very interesting properties. It looks like the plot in Figure 22 and, as a → 0, it approaches zero for all ω ≠ 0.
However, its area is always equal to

∫_{−∞}^{∞} a/(a² + ω²) dω = tan⁻¹(ω/a) |_{−∞}^{∞} = π.

Hence, as a → 0, the function a/(a² + ω²) → πδ(ω).

Figure 22: Limit of the function a/(a² + ω²) as a → 0.
Symmetry property
If f (t) ⇐⇒ F (ω), then F (t) ⇐⇒ 2πf (−ω).
Example 2

On Page 11 we obtained (see Figure 23):

rect(t/a) ⇐⇒ a sinc(ωa/2),

with f(t) = rect(t/a) and F(ω) = a sinc(ωa/2).

Figure 23: Pulse function and its transform.

Using the symmetry property we obtain (see Figure 24):

a sinc(at/2) ⇐⇒ 2π rect(−ω/a) = 2π rect(ω/a),

where F(t) = a sinc(at/2), 2πf(−ω) = 2π rect(−ω/a), and the last equality holds since rect is an even function.
Time-shifting property
If f (t) ⇐⇒ F (ω), then f (t − t0 ) ⇐⇒ F (ω)e−jωt0 .
This shows that delaying a signal by t0 seconds does not change its amplitude spectrum. The phase spectrum,
however, is changed by −ωt0 .
Frequency-shifting property
If f (t) ⇐⇒ F (ω), then f (t)ejω0 t ⇐⇒ F (ω − ω0 ).
This property allows us, for example, to understand the phenomenon of amplitude modulation. Multiplication of a signal f(t) by a sinusoid cos ω0t amounts to modulating the sinusoid's amplitude. The sinusoid cos ω0t is called the carrier and the signal
f(t) is the modulating signal. Since

cos ω0t = (1/2)(e^(jω0 t) + e^(−jω0 t)),

we obtain:

f(t) cos ω0t ⇐⇒ (1/2)[F(ω − ω0) + F(ω + ω0)].

Figure 25: Amplitude modulation.
So, multiplication of a signal by a sinusoid of frequency ω0 shifts the spectrum F (ω) by ±ω0 (see Figure 25).
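The modulation property can be visualised with a DFT; the sketch below (assuming NumPy, with an arbitrary Gaussian pulse and a 50 Hz carrier, neither taken from the notes) shows the baseband spectral peak moving to the carrier frequency:

```python
import numpy as np

# Modulation property: multiplying f(t) by cos(ω0 t) shifts the spectrum to
# ±ω0. Baseband signal: an arbitrary Gaussian pulse; carrier: 50 Hz.
fs = 1000.0                              # sample rate, Hz
t = np.arange(-1.0, 1.0, 1 / fs)
f = np.exp(-50 * t ** 2)                 # baseband pulse
carrier = np.cos(2 * np.pi * 50 * t)

F = np.abs(np.fft.rfft(f))               # spectrum of f(t): peak at 0 Hz
Fm = np.abs(np.fft.rfft(f * carrier))    # spectrum of the modulated signal
freqs = np.fft.rfftfreq(len(t), 1 / fs)

print(freqs[np.argmax(F)])               # baseband peak at DC
print(freqs[np.argmax(Fm)])              # modulated peak at the 50 Hz carrier
```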
Convolution

If f1(t) ⇐⇒ F1(ω) and f2(t) ⇐⇒ F2(ω), then we have:

Time convolution: f1(t) ∗ f2(t) ⇐⇒ F1(ω)F2(ω).

Frequency convolution: f1(t)f2(t) ⇐⇒ (1/(2π)) F1(ω) ∗ F2(ω).
Notice that we can obtain the Fourier transform of the unit step function u(t) using the time integration property. Indeed,

f(t) = δ(t) ⇐⇒ F(ω) = 1,

and,

u(t) = ∫_{−∞}^{t} δ(τ) dτ ⇐⇒ 1/(jω) + πδ(ω).

(Compare this last result with (21).)
h(t) = (W/π) sinc(Wt) ⇐⇒ H(ω) = rect(ω/(2W)). (22)
We can see from Figure 26 that h(t) is non-causal, since h(t) ≠ 0 for t < 0 (see the definition of a causal system on Page 6). How can we obtain a filter with lowpass characteristics but that can be realised with a causal system?
Figure 26: Ideal lowpass filter.

To tackle this question let us first analyse the conditions for a distortionless transmission. As can be seen from the frequency response of a system in (19)–(20), during transmission through a system some frequency components may be boosted in amplitude
while others are attenuated. The relative phases of the various components also change. In general, the output
waveform will be different from the input waveform. Transmission through a channel is considered distortionless
if the input and the output have identical wave shapes within a multiplicative constant and with the output possibly
delayed with respect to the input (i.e., a time-delay is tolerated). Thus, for distortionless transmission, the output
y(t) of a transmission channel, corresponding to an input f (t), must be:
y(t) = kf (t − td ). (23)
Hence:

H(ω) = k e^(−jω td)  ⇒  |H(ω)| = k,  ∠H(ω) = −ω td.
Therefore, for distortionless transmission the amplitude response |H(ω)| must be a constant and the phase response ∠H(ω) must be a linear function of ω with slope −td (td is the time delay). If the slope of ∠H(ω) is not constant, then different frequency components in the input signal undergo different amounts of time delay and the output is not a replica of the input waveform. Generally, the human ear is sensitive to amplitude distortion but relatively insensitive to phase distortion. This is the reason why the manufacturers of audio equipment make available only the |H(ω)| characteristics of their systems. For video signals the situation is the opposite. The human eye is sensitive to phase distortion but relatively insensitive to amplitude distortion. Phase distortion causes different time delays in different picture elements, resulting in a smeared picture.
Ideal filters allow distortionless transmission of a certain band of frequencies and suppress the remaining fre-
quencies. Figure 27 represents the spectral response of ideal lowpass, highpass and bandpass filters, respectively.
Figure 27: (a) Ideal lowpass filter; (b) ideal highpass filter.

For the lowpass filter of Figure 27(a) we have:

H(ω) = rect(ω/(2W)) e^(−jω td).
By Equation (22) and the time-shifting property (see Page 14) we obtain:

h(t) = F⁻¹[rect(ω/(2W)) e^(−jω td)] = (W/π) sinc(W(t − td)).

Figure 28: Impulse response of ideal lowpass filter.
The impulse response h(t) is shown in Figure 28. Clearly the filter is still non-causal and hence not realisable. One practical approach is to cut off the tail of h(t) for t < 0:

ĥ(t) = h(t)u(t),

where u(t) is the unit step function defined in Example 1 on Page 9 (see Figure 15). The truncated signal ĥ(t) is shown in Figure 29. If td is sufficiently large, ĥ(t) will be a close approximation of h(t) and the resulting filter Ĥ(ω) will be a good approximation of the ideal filter. A glance at Figure 28 shows that a delay td of three to four times π/W will make ĥ(t) a reasonably close version of h(t).

Figure 29: Truncated impulse response of ideal lowpass filter.
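The truncation ĥ(t) = h(t)u(t) is easy to experiment with numerically. A sketch assuming NumPy, with W and the time grid chosen arbitrarily and td = 4π/W as suggested above (note that NumPy's sinc is the normalised sin(πx)/(πx), hence the division by π):

```python
import numpy as np

# Causal approximation ĥ(t) = h(t)u(t) of the ideal-lowpass impulse response
# h(t) = (W/π) sinc(W(t - td)), with td = 4π/W as suggested above. W and the
# time grid are arbitrary choices. Note: np.sinc(x) = sin(πx)/(πx), so Lathi's
# sinc(W t) is written np.sinc(W t / π).
W = 10.0                                 # cutoff frequency, rad/s
td = 4 * np.pi / W                       # delay: four times π/W
t = np.arange(-5.0, 20.0, 1e-3)

h = (W / np.pi) * np.sinc(W * (t - td) / np.pi)
h_hat = np.where(t >= 0, h, 0.0)         # cut off the tail for t < 0

# Fraction of the energy discarded by the truncation: small for this td
frac = np.sum((h - h_hat) ** 2) / np.sum(h ** 2)
print(frac)
```

Only around one percent of the energy of h(t) is discarded with this delay, which is why ĥ(t) yields a good approximation Ĥ(ω) of the ideal filter.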
We have seen that, for a physically realisable system, h(t) must be causal. That is,
h(t) = 0 for t < 0.
In the frequency domain, this condition is equivalent to the Paley-Wiener criterion, which states that the neces-
sary and sufficient condition for the amplitude response |H(ω)| to be realisable is:
∫_{−∞}^{∞} |ln|H(ω)|| / (1 + ω²) dω < ∞. (25)
Note that if H(ω) = 0 over any finite band (that is, | ln|H(ω)| | = ∞) then condition (25) is not satisfied.
Therefore, for a physically realisable system, H(ω) may be zero at some discrete frequencies, but it cannot be
zero over any finite band. Hence, the ideal characteristics of Figure 27 are not realisable.
Case-study of a practical filter
We obtained earlier that the circuit represented in Figure 3 can be described by the differential equation (6):

dvo(t)/dt + (1/(CRf)) vo(t) = −(1/(CRi)) vi(t).
Applying the Fourier transform to the above equation and using the time differentiation property (see Page 14)
we obtain:
jω Vo(ω) + (1/(CRf)) Vo(ω) = −(1/(CRi)) Vi(ω),
that is,

Vo(ω) = [ −1/(CRi) ] / [ jω + 1/(CRf) ] · Vi(ω),

where the factor multiplying Vi(ω) is H(ω).
By inspecting the above equation and recalling (18) we conclude that the frequency response of the filter (i.e., the
Fourier transform of its impulse response) is:
H(ω) = [ −1/(CRi) ] / [ jω + 1/(CRf) ]. (26)
(If you need further convincing, consider that the Fourier transform of the unit impulse function, see Page 12, is
F[δ(t)] = 1. Hence, when the input is vi (t) = δ(t), the Fourier transform Vo (ω) of the output—i.e., the Fourier
transform of the impulse response—is precisely H(ω), since Vo (ω) = H(ω)Vi (ω) = H(ω) × 1 = H(ω).)
From Example 1 on Page 9 we can deduce that the expression for the impulse response in the time domain is:
h(t) = −(1/(CRi)) e^(−t/(CRf)) u(t),
which coincides with the expression we obtained on Page 7 after having solved the differential equation.
On Page 5 we mentioned that the impulse response can be readily obtained using Laplace transforms. We can
see from the above derivations that the same can be achieved by using Fourier transforms.
To plot the frequency response of the filter (see Page 11)
we need to compute the amplitude and phase of H(ω).
Rewriting (26) we obtain:

H(ω) = −(Rf/Ri) / (jCRf ω + 1),

|H(ω)| = (Rf/Ri) / √((CRf ω)² + 1),
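Evaluating (26) at a few frequencies confirms these expressions; the component values below are hypothetical, not taken from the notes (assuming NumPy):

```python
import numpy as np

# Evaluating (26) for the active RC filter. The component values are
# hypothetical (not from the notes): Ri = Rf = 10 kΩ, C = 1 µF, so that the
# DC gain Rf/Ri is 1 and the corner frequency is 1/(C*Rf) = 100 rad/s.
Ri, Rf, C = 10e3, 10e3, 1e-6
H = lambda w: (-1.0 / (C * Ri)) / (1j * w + 1.0 / (C * Rf))

wc = 1.0 / (C * Rf)                      # corner frequency, rad/s
w = np.array([0.0, wc, 10.0 * wc])       # DC, corner, a decade above
mag = np.abs(H(w))

print(mag[0])                 # DC gain = Rf/Ri = 1
print(mag[1])                 # 1/sqrt(2) of the DC gain at the corner
print(np.angle(H(0.0)))       # phase π at DC: the op-amp stage inverts
```

The 1/√2 point at ω = 1/(CRf) and the roll-off beyond it are exactly the lowpass behaviour deduced from the step response earlier.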
Data truncation: Window functions
We often need to truncate data; e.g., in numerical computations we have to deal with data of finite duration. Another example is making the response of an ideal filter causal, as seen in Figures 28–29. In signal sampling, we will see later that, to eliminate aliasing, we need to truncate the signal's spectrum beyond the half-sampling frequency ωs/2 using an anti-aliasing filter.
Truncation can be regarded as multiplying a signal of a large width by a window function of a smaller width.
Spectral spreading
Consider a signal f (t) and a window w(t). If f (t) ⇐⇒ F (ω) and w(t) ⇐⇒ W (ω), then the frequency convolu-
tion property of the Fourier transform (see Page 14) gives:
fw(t) = f(t)w(t) ⇐⇒ Fw(ω) = (1/(2π)) F(ω) ∗ W(ω). (27)
Hence we can see by the width property of convolution (see Page 6) that the width of Fw (ω) is the sum of the
widths of F (ω) and W (ω). Thus, truncation of a signal increases its bandwidth (spreads its spectrum).
Leakage

The window function W(ω) is not strictly bandlimited (as we will see later in a tutorial problem, a signal cannot be simultaneously timelimited and bandlimited) and its spectrum goes to 0 only asymptotically. This causes the spectrum of the windowed signal to leak into the band where it is supposed to be zero.

Figure 31 illustrates this for a rectangular window (top part) and for a triangular window wT(t) of width T (bottom part). (For comparison, we also show with a dashed line in the bottom part the effect of the rectangular window.)
As can be seen from (27), the effect of windowing in the frequency domain is given by the convolution of
the corresponding spectra. The result of convolving H(ω) with the Fourier transform of the rectangular window,
WR (ω), is shown in the top part of Figure 32. And the result of convolving H(ω) with the Fourier transform of
the triangular window, WT (ω), is shown in the bottom part of Figure 32.
Notice the spectral spreading at the edges of the spectra since, instead of a sudden switch, there is a gradual
transition from the passband to the stopband of the filter. The transition band is smaller (2π/T rad/sec) for the
rectangular window compared to the triangular window (4π/T rad/sec). In fact, among all the windows of a given
width, the rectangular window has the smallest spectral spread (this is because spectral spreading is determined
by the width of the mainlobe, and this width is minimal for the rectangular window).
Also notice that, although H(ω) is bandlimited, the windowed filters are not. The stopband behaviour of the triangular case is superior to that of the rectangular case. For the rectangular window, the leakage in the stopband decreases slowly (as 1/ω) compared to that of the triangular window (1/ω²). Moreover, the rectangular case has a higher peak sidelobe amplitude compared to that of the triangular window. This is because leakage is produced by a slow decay of the sidelobes of the window spectrum. For a signal with a jump discontinuity (as the rectangular window) the Fourier spectrum decays as 1/ω, while for a continuous signal whose derivative is discontinuous (as the triangular window) the Fourier spectrum decays as 1/ω².
In general, for a given window width, the remedies for the two effects (spectral spreading and leakage) are
incompatible; improving one deteriorates the other, and vice versa.
To reduce the spectral spread (mainlobe width) we need to increase the window’s width (which decreases the
signal’s bandwidth, as already noted from the scaling property of the Fourier transform—see Page 14).
To improve the leakage behaviour we need to select a suitably smooth window (with a faster decay of its
spectrum).
Thus, we can remedy both side effects of truncation by selecting a suitably smooth window of sufficient width. Some other popular examples of window functions are the Hanning window (faster sidelobe decay) and the Hamming window (smallest sidelobe magnitude for a given mainlobe width). In fact, there are hundreds of windows, each with different characteristics, and the choice depends on the particular application. (For more details on different window functions and their characteristics, see Section 4.9 and, in particular, Table 4.3 on Page 305 of Lathi's textbook.)
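The rectangular/triangular trade-off described above can be measured directly from zero-padded DFTs of the two windows; a sketch assuming NumPy (np.bartlett is the triangular window):

```python
import numpy as np

# Rectangular vs triangular window of the same width N: the rectangle has the
# narrower mainlobe (less spectral spreading) but far higher sidelobes (more
# leakage), and its sidelobes decay as 1/ω versus 1/ω² for the triangle.
N = 1024
rect = np.ones(N)
tri = np.bartlett(N)                     # triangular window

def spectrum_db(win):
    # heavily zero-padded DFT magnitude, normalised to 0 dB at the peak
    W = np.abs(np.fft.fft(win, 16 * N))[: 8 * N]
    return 20 * np.log10(W / W.max() + 1e-12)

R, T = spectrum_db(rect), spectrum_db(tri)

# Peak sidelobe level, searching just beyond each mainlobe (the rectangle's
# first spectral null falls at bin 16, the triangle's at bin 32):
R_peak = R[20:2000].max()
T_peak = T[35:2000].max()
print(R_peak)     # ≈ -13 dB for the rectangular window
print(T_peak)     # ≈ -27 dB for the triangular window
```

The numbers show both effects at once: the triangle's mainlobe (first null) is twice as wide, but its peak sidelobe is roughly 13 dB lower.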
Brief review of the Laplace transform
In this course we will not give the Laplace transform a great deal of attention. We will take it as assumed
knowledge. However, most of what you will need to know is briefly reviewed here.
Recall from Example 1 on Page 9 that the Fourier transform of f(t) = e^(−at) u(t) is equal to

F(ω) = 1/(jω + a), when a > 0.

However, the Fourier transform does not converge when a < 0, in which case the function looks like the plot of Figure 33.

Figure 33: Growing exponential.
But, how about if we could devise a transform with some 'built-in convergence'? Suppose we multiply f(t) by the factor e^(−σt), with σ > −a. Now, the function f̂(t) = e^(−at) u(t) × e^(−σt) = e^(−(a+σ)t) u(t) looks like the plot of Figure 34, and it is Fourier transformable since a + σ > 0. Indeed,

F̂(ω) = ∫_{−∞}^{∞} f̂(t) e^(−jωt) dt = ∫_{−∞}^{∞} e^(−(a+σ)t) u(t) e^(−jωt) dt = 1/((a + σ) + jω) = 1/((σ + jω) + a), for σ > −a,

where σ + jω plays the role of s.

Figure 34: Decaying exponential.
Such a transform (with the 'built-in convergence' factor e^(−σt)) is known as the Laplace transform, which is an extension of the Fourier transform obtained by generalising the frequency jω to the 'complex frequency' s = σ + jω. For a causal signal f(t) [f(t) = 0 for t < 0], the Laplace transform is:

F(s) = L[f(t)] = ∫_{0⁻}^{∞} f(t) e^(−st) dt,

where s = σ + jω.
As we can see, the definition of the Laplace transform is identical to the Fourier transform with jω replaced
by s. Indeed, when the region of convergence (ROC) of the Laplace transform F (s) of a function f (t) includes
the imaginary jω-axis, then the Fourier transform can be obtained simply by replacing s by jω:
F (jω) = F (s)|s=jω (29)
When the ROC of the Laplace transform does not contain the imaginary axis, the connection between the
Fourier and the Laplace transforms is not so simple (but we do not need to worry about this technicality in this
course!).
Out of the several properties of the Laplace transform, the main property we will need is the
time differentiation property: If f(t) ⇐⇒ F(s), then

dⁿf(t)/dtⁿ ⇐⇒ sⁿ F(s) − s^(n−1) f(0⁻) − s^(n−2) ḟ(0⁻) − . . . − f^((n−1))(0⁻). (30)
or,

Y(s) = [ (bn sⁿ + b_(n−1) s^(n−1) + . . . + b1 s + b0) / (sⁿ + a_(n−1) s^(n−1) + . . . + a1 s + a0) ] F(s),

where the bracketed ratio is the transfer function H(s).
Exercise 4
Show that L [δ(t)] = 1.
From the above exercise and (32) we conclude that the transfer function H(s) is the Laplace transform of the
impulse response h(t) (since, when f (t) = δ(t), F (s) = 1 and, hence, Y (s) = H(s)F (s) = H(s) × 1 = H(s)).
On Page 11 we defined the frequency response of a system to be H(ω), equal to the Fourier transform of the
impulse response. (Recall that Lathi uses both H(ω) and H(jω) to represent the same entity.)
When the system is causal and asymptotically stable, all the poles of H(s) lie in the left half plane (LHP) and the ROC for H(s) includes the imaginary jω-axis. For example, H(s) = 1/(s + a) is asymptotically stable when a > 0 and the ROC is depicted in Figure 35. As we have seen in (29), the frequency response can in this case be obtained by replacing s = jω in the transfer function of the system H(s). We thus conclude:
For the effect of H(jω) on the amplitude and phase spectra of a signal, see (19)–(20).
In practice, the sinusoidal signals we can generate in the laboratory are of the form:
Recall that, for a real function h(t), |H(−jω)| = |H(jω)| and ∠H(−jω) = −∠H(jω). We thus obtain, by applying superposition, that the response to

f(t) = cos(ωt + θ) = (1/2)(e^(j(ωt+θ)) + e^(−j(ωt+θ))) = (e^(jθ)/2) e^(jωt) + (e^(−jθ)/2) e^(−jωt)

is given by:

y(t) = (e^(jθ)/2) H(jω) e^(jωt) + (e^(−jθ)/2) H(−jω) e^(−jωt)
= (1/2)[ e^(j(ωt+θ)) |H(jω)| e^(j∠H(jω)) + e^(−j(ωt+θ)) |H(jω)| e^(−j∠H(jω)) ]
= |H(jω)| (1/2)[ e^(j(ωt+θ+∠H(jω))) + e^(−j(ωt+θ+∠H(jω))) ]
= |H(jω)| cos(ωt + θ + ∠H(jω)). (34)
We thus conclude:
Equation (34) says that, for a sinusoidal input of frequency ω rad/sec, the system's response is also a sinusoid of the same frequency ω. The amplitude of the output sinusoid is |H(jω)| times the input's amplitude, and the phase of the output sinusoid is shifted by ∠H(jω) with respect to the input phase. Clearly, the plots of |H(jω)| and ∠H(jω) as functions of ω show at a glance how the system modifies the amplitudes and phases of various sinusoidal inputs.
Filter design by placement of the poles and zeros of H(s)
If we factorise the numerator and denominator of H(s) in (33) we obtain:
H(s) = bn (s − z1)(s − z2) . . . (s − zn) / [ (s − p1)(s − p2) . . . (s − pn) ], (35)
where z1 , z2 , . . . , zn are the zeros of H(s) and p1 , p2 , . . . , pn are the poles of H(s). Hence, the frequency response
(see the previous page) is:
H(jω) = bn (jω − z1)(jω − z2) . . . (jω − zn) / [ (jω − p1)(jω − p2) . . . (jω − pn) ]. (36)
Each factor (jω − zi) in the numerator of (36) is a complex number represented by a vector from zi to jω (see Figure 38) or, in polar form, by ri e^(jφi) (where ri is the magnitude and φi is the angle of the vector). Similarly, each factor (jω − pi) in the denominator of (36) is a complex number represented by a vector from pi to jω (see Figure 38) or, in polar form, by di e^(jθi) (where di is the magnitude and θi is the angle of the vector). Hence we can rewrite (36) in terms of these magnitudes and angles.

Gain enhancement by a pole

Consider a pole placed at −α + jω0, close to the point jω0, and let d be the (small) distance from this pole to jω and d′ the distance from its conjugate −α − jω0 to jω. Near ω = ω0 the amplitude response behaves approximately as

|H(jω)| = K/d,

(since d′ is relatively bigger than d and, hence, its variations are not so significant). The frequency response is represented in the middle and right plots of Figure 39. Therefore, we conclude that we can enhance the gain at a frequency ω0 by placing a pole opposite the point jω0.

Figure 39: Gain enhancement by a pole.
Gain suppression by a zero

Consider the situation represented in Figure 40, where a zero has been placed opposite the frequency ω0. By a similar analysis to the previous one we can conclude that a zero has the opposite effect. If the zero is placed at −α ± jω0, it will suppress the gain in the vicinity of ω0. The frequency response is represented in the middle and right plots of Figure 40.

Figure 40: Gain suppression by a zero.
Lowpass filters

An ideal lowpass filter has a constant gain of unity up to frequency ωc and the gain drops suddenly to 0 for ω > ωc (see Figure 42). We already noticed previously that such an amplitude response is not physically realisable, since it is zero over a band of frequencies (in this case, over an infinite band of frequencies) and, hence, it does not satisfy the Paley-Wiener criterion (see (25) above).

Figure 42: Amplitude response of lowpass (Butterworth) filters.
To approximate the ideal characteristic we need enhanced gain over the entire passband, which can be achieved by placing a wall of poles opposite the frequency band from 0 to ωc (and from 0 to −ωc for the conjugate poles), as shown in Figure 43. It can be shown that for a maximally flat response over the frequency range 0 to ωc the wall is a semicircle with poles uniformly distributed, as shown in Figure 44 for n = 5. Figure 42 above shows the amplitude responses for various values of n when the poles are placed on a semicircle as in Figure 44. As n → ∞, the filter response approaches the ideal lowpass response. This family of filters is known as Butterworth filters. Another family of commonly used filters is that of the Chebyshev filters, in which the wall shape is a semi-ellipse. The characteristics of Chebyshev filters are inferior to those of Butterworth filters over the passband 0 to ωc (they have a rippling effect) but in the stopband, ωc to ∞, their behaviour is superior (the gain drops faster than in the Butterworth filters).
Bandpass filters
In the case of a bandpass filter, we need enhanced gain over the entire passband, as shown in Figure 45.
Butterworth filters
The amplitude response |H(jω)| of an nth order Butterworth lowpass filter is:
|H(jω)| = 1 / √(1 + (ω/ωc)^{2n}).   (38)
The poles of H(s)H(−s) satisfy

s^{2n} = −j^{2n},

and, hence, for the normalised filter they lie, uniformly spaced, on the unit circle; the n poles in the left half-plane are assigned to H(s).
We can proceed in this way to find H(s) for any value of n. In general,
H(s) = 1 / Bn(s) = 1 / (s^n + an−1 s^{n−1} + . . . + a1 s + 1),   (40)
where Bn (s) is the Butterworth polynomial of nth order. The design of Butterworth filters is facilitated by
ready-made tables of the coefficients of Bn (s) for various values of n. For example, see Tables 7.1 and 7.2 on
Page 509 of Lathi's textbook. There are also Matlab® functions that give the Butterworth filter transfer function, for example the function buttap.
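The notes point to the Matlab function buttap for this step; as a cross-check, the Butterworth polynomial Bn(s) can also be built directly from the pole locations. A minimal Python sketch (the function name butterworth_poly is mine, not from the textbook), assuming the standard left-half-plane pole formula s_k = e^{jπ(2k+n−1)/(2n)}:

```python
import cmath
import math

def butterworth_poly(n):
    """Coefficients (descending powers of s) of the normalised (wc = 1)
    Butterworth polynomial Bn(s), built from the left-half-plane poles."""
    # Poles of H(s)H(-s) lie on the unit circle; H(s) keeps the n poles in
    # the left half-plane: s_k = exp(j*pi*(2k + n - 1)/(2n)), k = 1..n.
    poles = [cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
             for k in range(1, n + 1)]
    coeffs = [1.0 + 0j]
    for p in poles:                      # multiply out prod_k (s - p_k)
        nxt = [0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c
            nxt[i + 1] -= c * p
        coeffs = nxt
    return [c.real for c in coeffs]     # imaginary parts cancel in conjugate pairs
```

For n = 2 this gives [1, 1.4142, 1], i.e. B2(s) = s^2 + √2 s + 1, matching the ready-made tables.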
Frequency scaling
The procedure explained above, and the tables with coefficients of Butterworth filters, e.g., Tables 7.1 and 7.2 on Page 509 of Lathi's textbook, are for normalised Butterworth filters with 3 dB bandwidth ωc = 1. The results can be extended to any value of ωc by replacing s by s/ωc (this implies replacing ω by ω/ωc in (39), thus obtaining (38)). For instance, from the previous example (n = 4) we can obtain a fourth-order Butterworth filter with ωc = 10 by replacing s by s/10, i.e., H(s) = H(s/10), yielding,

H(s) = 1 / [(s/10)^4 + 2.6131 (s/10)^3 + 3.4142 (s/10)^2 + 2.6131 (s/10) + 1]
     = 10,000 / (s^4 + 26.131 s^3 + 341.42 s^2 + 2,613.1 s + 10,000).
The amplitude response |H(jω)| of this filter is identical to that of the normalised filter |H(jω)| in Equation (39)
(shown in Figure 48), expanded by a factor of 10 along the horizontal ω-axis (frequency scaling).
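As a quick numerical check of this substitution (a Python sketch of my own, not from the notes): replacing s by s/ωc multiplies the coefficient of s^{n−i} in the normalised denominator by ωc^i once the common factor ωc^n is cleared.

```python
def lowpass_scale(coeffs, wc):
    """Scale a normalised lowpass denominator (coefficients in descending
    powers of s, leading coefficient first) to cutoff wc via s -> s/wc,
    after clearing the common factor wc**n."""
    return [c * wc ** i for i, c in enumerate(coeffs)]

# Normalised 4th-order Butterworth denominator scaled to wc = 10:
scaled = lowpass_scale([1, 2.6131, 3.4142, 2.6131, 1], 10)
# scaled reproduces the coefficients 1, 26.131, 341.42, 2613.1, 10000 above
```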
passband to the stopband) will be increased (notice how the roll-off increases with n in Figure 48). This means
that, if we compute ωc from (44) then the response will satisfy exactly the requirement Gp over the passband
0 ≤ ω ≤ ωp and will surpass the requirement Gs on the stopband ω ≥ ωs (see the top-left plot of Figure 47). On
the other hand, the use of (45) to compute ωc will exactly satisfy the requirement on Gs but will oversatisfy the
requirement for Gp . (The choice is yours!)
Chebyshev filters
The amplitude response of a normalised Chebyshev lowpass filter is:
|H(jω)| = 1 / √(1 + ε² Cn²(ω)),   (46)
Form (47) is most convenient for |ω| < 1 and form (48) is convenient for |ω| > 1. The Chebyshev polynomial
has the property
Cn(ω) = 2ω Cn−1(ω) − Cn−2(ω),   n ≥ 2.

Thus, we can find the polynomials recursively as follows:

C0(ω) = cos(0 × cos⁻¹ ω) = 1,
C1(ω) = cos(1 × cos⁻¹ ω) = ω,
C2(ω) = 2ω² − 1, etc.
29
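The recursion can be carried out mechanically on coefficient lists; a small Python sketch (my own helper, not from the textbook):

```python
def cheb(n):
    """Coefficients (ascending powers of w) of the Chebyshev polynomial Cn,
    from C0 = 1, C1 = w and the recursion Cn = 2*w*Cn-1 - Cn-2."""
    c_prev, c = [1], [0, 1]                  # C0 and C1
    if n == 0:
        return c_prev
    for _ in range(n - 1):
        twice = [0] + [2 * x for x in c]     # multiply C_{n-1} by 2w
        pad = c_prev + [0] * (len(twice) - len(c_prev))
        c_prev, c = c, [a - b for a, b in zip(twice, pad)]
    return c
```

Here cheb(2) returns [-1, 0, 2] (C2 = 2ω² − 1) and cheb(3) returns [0, -3, 0, 4] (C3 = 4ω³ − 3ω).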
4. The Chebyshev polynomials are equal-ripple functions, hence the ripples in the passband are of equal height. The parameter ε controls the height of the ripple; the ratio of the maximum gain to the minimum gain in the passband is:

r = √(1 + ε²),   or, in dB:   r̂ = 20 log10 √(1 + ε²) = 10 log10 (1 + ε²).

The transfer function of an nth order Chebyshev filter has the form:

H(s) = Kn / C′n(s) = Kn / (s^n + an−1 s^{n−1} + . . . + a1 s + a0),   (51)
where the polynomial C′n(s) can be found in ready-made tables (e.g., see Table 7.4 on Page 518 of Lathi's textbook). There are also Matlab® functions that give the Chebyshev filter transfer function, for example the function cheb1ap.
The constant Kn in (51) is selected to have proper DC gain |H(0)| according to (49). Thus,

Kn = a0,   when n is odd,
Kn = a0 / √(1 + ε²) = a0 / 10^{r̂/20},   when n is even.   (52)
Frequency scaling
The procedure explained above, and the tables with coefficients of Chebyshev filters, e.g., Table 7.4 on Page 518
of Lathi's textbook, are for normalised Chebyshev filters with ωp = 1. The results can be extended to any value of ωp by replacing s by s/ωp in the normalised transfer function H(s). That is, H(s) = H(s/ωp). The resulting amplitude response is then obtained from (46),

|H(jω)| = 1 / √(1 + ε² Cn²(ω/ωp)).   (53)
Determination of the filter order n
Suppose a gain Ĝs in dB (recall the definition of dB in (37)) is specified at frequency ωs for a lowpass Chebyshev
filter, as shown on the top-left plot of Figure 47. From (37) and (53) we obtain:
Ĝs = 20 log10 |H(jωs)| = −10 log10 [1 + ε² Cn²(ωs/ωp)],

or,

ε² Cn²(ωs/ωp) = 10^{−Ĝs/10} − 1.

Use of (48) and (50) in the above expression yields

cosh[n cosh⁻¹(ωs/ωp)] = [(10^{−Ĝs/10} − 1) / (10^{r̂/10} − 1)]^{1/2}.

Finally,

n = cosh⁻¹{[(10^{−Ĝs/10} − 1) / (10^{r̂/10} − 1)]^{1/2}} / cosh⁻¹(ωs/ωp).
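The final formula is easy to evaluate; a Python sketch (the function name and the example specification, ωs/ωp = 2 with r̂ = 1 dB ripple and Ĝs = −20 dB, are my own illustrative choices), rounding up since n must be an integer:

```python
import math

def chebyshev_order(ws_over_wp, ripple_db, gs_db):
    """Smallest integer n meeting the stopband spec, from the acosh formula.
    ripple_db is the passband ripple r-hat (dB); gs_db is the stopband
    gain G-hat_s (dB, negative)."""
    num = math.sqrt((10 ** (-gs_db / 10) - 1) / (10 ** (ripple_db / 10) - 1))
    return math.ceil(math.acosh(num) / math.acosh(ws_over_wp))

# Illustrative spec: ws/wp = 2, 1 dB ripple, at least 20 dB stopband attenuation.
n = chebyshev_order(2, 1, -20)
```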
Frequency transformations
We can obtain transfer functions of highpass, bandpass and bandstop filters using frequency transformations on
a basic lowpass filter, called the prototype filter. The prototype lowpass filter, denoted Hp (s), can be, e.g., a
Butterworth or a Chebyshev filter. We then replace s in the prototype filter with a proper transformation T (s), as
explained below.
Highpass filters
Given the highpass filter specifications shown on the left plot of Figure 53, we first design a prototype lowpass
filter Hp (s) with the specifications shown on the right plot of Figure 53, and then we replace s with T (s) in Hp (s),
where
T(s) = ωp / s.   (54)
To see how this transformation works, note that when s = jω we have:
T(jω) = ωp / (jω) = −j ωp / ω,
and, hence, when ω → 0 we have that T (jω) → −j∞. We can then see that the resulting transformed filter
has, at low frequencies (ω → 0), the characteristics of the prototype lowpass filter at high (negative) frequencies
(T (jω) → −j∞). Recalling, from (16), that the amplitude response of the filter has conjugate symmetry, that
is |Hp (−jω)| = |Hp (jω)|, we can then see that the transformed filter has, at low frequencies, the characteristics
of the prototype lowpass filter at high positive frequencies as well and, thus, it attenuates the low frequencies as
required on the left plot of Figure 53.
Performing a similar analysis, and noticing that, when ω = ωs, T(jωs) = −j ωp/ωs; when ω = ωp, T(jωp) = −j1; and, when ω → ∞, T(jω) → 0, we can see that when the prototype lowpass filter satisfies the specifications
on the right plot of Figure 53, the transformed filter satisfies the original highpass filter specifications given on
the left plot of the figure.
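A small numerical illustration of (54) (Python; the first-order prototype Hp(s) = 1/(s + 1) and the edge ωp = 2 are my own toy choices, not from the notes):

```python
# Toy illustration: a first-order lowpass prototype turned into a highpass
# filter via the substitution s -> T(s) = wp/s of (54).
wp = 2.0                                   # hypothetical highpass passband edge
Hp = lambda s: 1 / (s + 1)                 # prototype lowpass filter
H = lambda s: Hp(wp / s)                   # transformed (highpass) filter

assert abs(H(1j * 1e6)) > 0.999            # |H| -> 1 as w -> infinity
assert abs(H(1j * 1e-6)) < 1e-3            # |H| -> 0 as w -> 0 (low freqs blocked)
assert abs(abs(H(1j * wp)) - abs(Hp(1j))) < 1e-12  # w = wp maps to prototype w = 1
```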
Bandpass filters

Given the bandpass filter specifications shown on the left plot of Figure 54, we first design a prototype lowpass filter Hp(s) with the specifications shown on the right plot of Figure 54 (the prototype stopband edge ωs is obtained by mapping the original stopband edges through the transformation below), and then we replace s with T(s) in Hp(s), where

T(s) = (s² + ωp1 ωp2) / [(ωp2 − ωp1) s].   (56)
To see how this transformation works, note that when s = jω we have:
T(jω) = (−ω² + ωp1 ωp2) / [(ωp2 − ωp1) jω] = j (ω² − ωp1 ωp2) / [(ωp2 − ωp1) ω].

Noticing that:

ω → 0 ⇒ T(jω) → −j∞,

ω = ωs1 ⇒ Im{T(jωs1)} = Im{ j (ωs1² − ωp1 ωp2) / [(ωp2 − ωp1) ωs1] } = −(ωp1 ωp2 − ωs1²) / [(ωp2 − ωp1) ωs1] ≤ −ωs,

ω = ωp1 ⇒ T(jωp1) = j (ωp1² − ωp1 ωp2) / [(ωp2 − ωp1) ωp1] = −j1,

ω = ωp2 ⇒ T(jωp2) = j (ωp2² − ωp1 ωp2) / [(ωp2 − ωp1) ωp2] = j1,

ω = ωs2 ⇒ Im{T(jωs2)} = Im{ j (ωs2² − ωp1 ωp2) / [(ωp2 − ωp1) ωs2] } = (ωs2² − ωp1 ωp2) / [(ωp2 − ωp1) ωs2] ≥ ωs,

ω → ∞ ⇒ T(jω) → j∞,
and performing a similar analysis to the one above, for the case of a highpass filter, we can see that when the
prototype lowpass filter satisfies the specifications on the right plot of Figure 54, the transformed filter satisfies
the original bandpass filter specifications given on the left plot of the figure.
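The mapping properties listed above can be checked numerically; a Python sketch with arbitrary illustrative band edges ωp1 = 1, ωp2 = 4 (my own choices, not from the notes):

```python
import math

# Hypothetical passband edges, chosen only to illustrate the mapping (56).
wp1, wp2 = 1.0, 4.0
T = lambda s: (s * s + wp1 * wp2) / ((wp2 - wp1) * s)

assert abs(T(1j * wp1) - (-1j)) < 1e-12    # w = wp1 maps to the prototype point -j1
assert abs(T(1j * wp2) - 1j) < 1e-12       # w = wp2 maps to the prototype point +j1
w0 = math.sqrt(wp1 * wp2)                  # geometric centre of the passband
assert abs(T(1j * w0)) < 1e-12             # the band centre maps to prototype DC
```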
Bandstop filters
Given the bandstop filter specifications shown on the left plot of Figure 55, we first design a prototype lowpass filter Hp(s) with the specifications shown on the right plot of Figure 55 (the prototype stopband edge ωs is obtained by mapping the original stopband edges through the transformation below), and then we replace s with T(s) in Hp(s), where

T(s) = (ωp2 − ωp1) s / (s² + ωp1 ωp2).   (58)
Noticing that:
ω → 0 ⇒ T(jω) → 0,

ω = ωp1 ⇒ T(jωp1) = j (ωp2 − ωp1) ωp1 / (−ωp1² + ωp1 ωp2) = j1,

ω = ωs1 ⇒ Im{T(jωs1)} = Im{ j (ωp2 − ωp1) ωs1 / (−ωs1² + ωp1 ωp2) } = (ωp2 − ωp1) ωs1 / (−ωs1² + ωp1 ωp2) ≥ ωs,

ω = ωs2 ⇒ Im{T(jωs2)} = Im{ j (ωp2 − ωp1) ωs2 / (−ωs2² + ωp1 ωp2) } = −(ωp2 − ωp1) ωs2 / (ωs2² − ωp1 ωp2) ≤ −ωs,

ω = ωp2 ⇒ T(jωp2) = j (ωp2 − ωp1) ωp2 / (−ωp2² + ωp1 ωp2) = −j1,

ω → ∞ ⇒ T(jω) → 0,
and performing a similar analysis to the ones above, for the cases of highpass and bandpass filters, we can see
that when the prototype lowpass filter satisfies the specifications on the right plot of Figure 55, the transformed
filter satisfies the original bandstop filter specifications given on the left plot of the figure.
Sampling
Brief review of Fourier series
Recall that a signal f(t) is periodic with period T0 if

f(t) = f(t + T0)   for all t,   (59)

where T0 is the smallest value such that (59) is satisfied.
expressed as an exponential Fourier series:
f(t) = Σ_{n=−∞}^{∞} Dn e^{jnω0 t},   (60)

where ω0 = 2π/T0 is the fundamental frequency. Equation (60) expresses f(t) as a (possibly infinite) sum of exponential functions of frequencies nω0 (i.e., integer multiples of the fundamental frequency), called the nth harmonics. Finding the coefficients Dn of the series (60) is quite simple; we just have to compute the following integral over an interval of duration T0:

Dn = (1/T0) ∫_{T0} f(t) e^{−jnω0 t} dt.   (61)
(Where the interval of duration T0 is located on the real axis does not matter since f (t)e−jnω0 t is periodic with
period T0 .)
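Equation (61) can be checked numerically; a Python sketch (my own, not from the notes) that approximates the integral by a Riemann sum for f(t) = cos(ω0 t), whose only nonzero coefficients are D±1 = 1/2:

```python
import cmath
import math

T0 = 2.0                                  # illustrative period
w0 = 2 * math.pi / T0                     # fundamental frequency
f = lambda t: math.cos(w0 * t)            # test signal: a pure fundamental

def D(n, samples=4096):
    """Riemann-sum approximation of the integral (61) over one period."""
    dt = T0 / samples
    acc = 0j
    for k in range(samples):
        t = k * dt
        acc += f(t) * cmath.exp(-1j * n * w0 * t) * dt
    return acc / T0

assert abs(D(1) - 0.5) < 1e-6             # cos(w0 t) = (1/2)e^{jw0t} + (1/2)e^{-jw0t}
assert abs(D(2)) < 1e-6                   # no second harmonic
```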
What about the Fourier transform of such a signal? Since a periodic signal never extinguishes, the first Dirichlet condition, ∫_{−∞}^{∞} |f(t)| dt < ∞ (see Page 10), is not satisfied! However, notice that those conditions are only sufficient (but not necessary). In any case, one would expect something remarkable to happen with such a signal. Intuitively, one would expect such a signal to have its spectrum concentrated at the fundamental and harmonic frequencies. Effectively, applying the result of Exercise 2 on Page 12 to (60) we obtain
F(ω) = 2π Σ_{n=−∞}^{∞} Dn δ(ω − nω0),   (62)
that is, not only is the spectrum concentrated at the harmonic frequencies, it actually shoots to infinity at those
frequencies!
As an example, let us consider a remarkable signal, the
impulse train (also known as the ‘Dirac comb’) with pe-
riod T , shown in Figure 56,
δT(t) = Σ_{n=−∞}^{∞} δ(t − nT).   (63)

Figure 56: Impulse train with period T.
The frequency of this signal is ωs = 2π/T = 2πFs (where Fs is the frequency expressed in hertz = cycles/sec).
We obtain Dn from (61),

Dn = (1/T) ∫_{−T/2}^{T/2} δT(t) e^{−jnωs t} dt.

In the impulse train (see Figure 56) there is only one impulse in the interval [−T/2, T/2], that is, δ(t). Hence,

Dn = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jnωs t} dt = (1/T) e^{−jnωs × 0} = 1/T,
where we have used the sampling property of the impulse function (see (9)).
Hence, δT(t) can be expressed as a Fourier series (60), i.e.,

δT(t) = (1/T) Σ_{n=−∞}^{∞} e^{jnωs t},   ωs = 2π/T.

Thus, we have derived Fourier pair 21 in Table 4.1 of Lathi's textbook, Page 252,

δT(t) = Σ_{n=−∞}^{∞} δ(t − nT) ⇐⇒ ωs Σ_{n=−∞}^{∞} δ(ω − nωs),   ωs = 2π/T.   (64)
Ideal sampling of a signal f(t) is achieved by multiplying it by the impulse train δT(t) consisting of unit impulses repeated periodically every T seconds (T = 1/Fs), as expressed by (63) and shown in Figure 56. The result is the sampled signal f̄(t) shown on the bottom-left plot of Figure 57, consisting of impulses spaced every T seconds, each weighted by the value of the function at that instant,

f̄(t) = f(t) δT(t) = Σ_{n=−∞}^{∞} f(nT) δ(t − nT).
Therefore,

F̄(ω) = (1/T) Σ_{n=−∞}^{∞} F(ω − nωs).   (65)

Thus, the spectrum of f̄(t) is the spectrum of f(t) repeated periodically with period ωs = 2π/T rad/sec, or Fs = 1/T Hz, and divided by T, as shown on the bottom-right plot of Figure 57.
If we want to reconstruct f (t) from f¯(t) we should be able to recover F (ω) from F̄ (ω). As can be seen from
Figure 57, this recovery is possible if there is no overlap between successive cycles of F̄ (ω). That is, we require:
Nyquist-Shannon sampling theorem:   Fs ≥ 2B,   or, equivalently,   T ≤ 1/(2B).   (66)

As long as the sampling frequency Fs is greater than twice the signal bandwidth B (in hertz), F̄(ω) will consist of non-overlapping repetitions of F(ω), and f(t) can be recovered from its samples f̄(t) by passing the sampled signal f̄(t) through an ideal lowpass filter of bandwidth B Hz. The frequency Fs = 2B is called the Nyquist rate for f(t). The sampling interval T = 1/(2B) is called the Nyquist interval for f(t).
2B
f(t) = Σ_{n=−∞}^{∞} f(nT) h(t − nT) = Σ_{n=−∞}^{∞} f(nT) sinc[2πB(t − nT)].   (67)
n=−∞ n=−∞
Equation (67) is the interpolation formula that yields the values of f (t) between samples as a weighted sum
of all the sample values. The process of signal reconstruction by interpolation is illustrated in Figure 60.
Aliasing
All practical signals are timelimited, and a signal cannot be simultaneously timelimited and bandlimited (see
the tutorial problems). As a result, there will always be an amount of overlap between the repetitions of the
spectrum of F (ω) and, hence, parts of the spectrum (the tail beyond the half sampling frequency, Fs /2) get
folded back producing an effect known as aliasing, illustrated in Figure 63. A practical solution is to use an
antialiasing filter (a lowpass filter) “before the signal is sampled”, so as to suppress the frequency components
beyond the folding frequency Fs /2.
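A minimal numeric illustration of aliasing (Python, my own example): sampled at Fs = 10 Hz, a 9 Hz cosine produces exactly the same samples as a 1 Hz cosine, which is why the spectral tail beyond Fs/2 folds back onto lower frequencies:

```python
import math

Fs = 10.0                    # sampling rate in Hz (illustrative)
T = 1 / Fs
f_high, f_alias = 9.0, 1.0   # 9 Hz lies above Fs/2 and folds down to 10 - 9 = 1 Hz

for k in range(50):
    s_high = math.cos(2 * math.pi * f_high * k * T)
    s_alias = math.cos(2 * math.pi * f_alias * k * T)
    # The two sinusoids are indistinguishable from their samples alone:
    assert abs(s_high - s_alias) < 1e-9
```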
Practical sampling
We assumed before that ideal samples are obtained by multiplying a signal f (t) by an impulse train (as illustrated
in Figure 57), which is physically nonexistent. In practice, we multiply a signal by a train of pulses of finite width,
as shown on the left part of Figure 64.
Since the pulse train pT (t) is periodic, we can express it as a Fourier series (60), that is,
pT(t) = Σ_{n=−∞}^{∞} Dn e^{jnωs t},   ωs = 2π/T.
Hence, by a similar analysis to the one performed on Page 35 for the case of ideal sampling we obtain:
F̄(ω) = (1/2π) F(ω) ∗ F[pT(t)] = (1/2π) F(ω) ∗ 2π Σ_{n=−∞}^{∞} Dn δ(ω − nωs) = Σ_{n=−∞}^{∞} Dn F(ω − nωs).
The spectrum F̄ (ω) of the sampled signal is shown on the bottom-right plot of Figure 64. Clearly, the signal f (t)
can be recovered by lowpass filtering f¯(t), provided ωs > 4πB (i.e., Fs > 2B).
The dual of the above theorem is the frequency-sampling theorem, which states that the spectrum F (ω) of
a signal timelimited to τ seconds (signal width) can be reconstructed from the samples of F (ω) taken at a rate
R ≥ τ samples per hertz (see (71) below).
Consider a timelimited signal f (t) as shown in Figure 65.
We now construct a periodic signal fT0 (t), formed by repeating f (t) every T0 seconds (with T0 > τ ), as shown in
Figure 66.
The periodic signal fT0 (t) can be expressed by an exponential Fourier series (60):
fT0(t) = Σ_{n=−∞}^{∞} Dn e^{jnω0 t},   ω0 = 2π/T0,

where, assuming that τ < T0, the coefficients are computed using (61):

Dn = (1/T0) ∫_0^{T0} f(t) e^{−jnω0 t} dt = (1/T0) ∫_0^{τ} f(t) e^{−jnω0 t} dt.
We can see from (69) that

Dn = (1/T0) F(nω0),

that is, the coefficients of the Fourier series of fT0(t) are 1/T0 times the samples of F(ω) taken at intervals of ω0. As long as τ ≤ T0 the successive cycles of f(t) do not overlap, so that f(t) can be recovered from fT0(t). Thus, the condition for recovery is T0 ≥ τ. Equivalently, since F0 = ω0/(2π) = 1/T0,

F0 ≤ 1/τ Hz,   (70)
or, in terms of the sampling rate R (samples/Hz),

R = 1/F0 ≥ τ samples per hertz,   (τ = signal width).   (71)
Condition (71) on the sampling of the spectrum for recovery of a signal is the dual of condition (68) on the
sampling of the time-signal for recovery.
Figure 67: Sampling and periodic repetition of a signal results in sampling and periodic repetition of its spectrum.
Consider a timelimited signal f (t) as shown in Figure 67(a) and its spectrum shown in Figure 67(b). The
spectrum of the sampled signal f¯(t) consists of F (ω) repeated every Fs = 1/T (Figure 67(d)). Then, the sampled
signal f¯(t) is repeated periodically every T0 seconds (Figure 67(e)). According to the spectral sampling theorem
(explained in the preceding section), such an operation results in sampling the spectrum at a rate of T0 samples
per hertz (Figure 67(f)).
In conclusion, when a signal f (t) is sampled and periodically repeated, the corresponding spectrum is also
sampled and periodically repeated. The discrete Fourier transform (DFT) relates the samples of f (t) to the
samples of F (ω).
Number of samples: Let N0 be the number of samples of f(t) in one period T0. We can see from Figure 67(e) that N0 = T0/T.

Let N0′ be the number of samples of the spectrum in one period Fs. We can see from Figure 67(f) that N0′ = Fs/F0.

Since Fs = 1/T and F0 = 1/T0, we have that N0 = T0/T = Fs/F0 = N0′. That is,

N0 = N0′.
So, we conclude that, interestingly, the number N0 of samples of the signal in Figure 67(e) in one period T0 is identical to the number N0′ of samples of the spectrum in Figure 67(f) in one period Fs.
The sampled signal f̄(t) in Figure 67(c) can be expressed as:

f̄(t) = Σ_{k=0}^{N0−1} f(kT) δ(t − kT).

Since δ(t − kT) ⇐⇒ e^{−jkωT} (prove it!), the Fourier transform of f̄(t) is:

F̄(ω) = F{f̄(t)} = Σ_{k=0}^{N0−1} f(kT) e^{−jkωT}.
From Figure 57 and Equation (65) we have (assuming negligible aliasing) that F̄(ω) = F(ω)/T in the interval −ωs/2 ≤ ω ≤ ωs/2. Thus,

F(ω) = T F̄(ω) = T Σ_{k=0}^{N0−1} f(kT) e^{−jkωT},   |ω| ≤ ωs/2.

The samples of F(ω) at multiples of ω0 = 2π/T0 are:
F(rω0) = Σ_{k=0}^{N0−1} T f(kT) e^{−jkrω0T}.

If we call ω0T = Ω0 = (2π/T0)T = 2π/N0, and fk = T f(kT), then

Fr = Σ_{k=0}^{N0−1} fk e^{−jrΩ0 k},   Ω0 = ω0T = 2π/N0.

To obtain the inverse relationship, consider the sum Σ_{r=0}^{N0−1} Fr e^{jrΩ0 k} and substitute, with the summation index renamed,

Fr = Σ_{m=0}^{N0−1} fm e^{−jrΩ0 m}.
Noticing that

Σ_{r=0}^{N0−1} e^{j(k−m)Ω0 r} = N0 when m = k,   and 0 when m ≠ k,

we obtain

fk = (1/N0) Σ_{r=0}^{N0−1} Fr e^{jrΩ0 k},   Ω0 = ω0T = 2π/N0.
We have thus found a relationship between the samples of f (t) (Figure 67(e)) and the samples of F (ω)
(Figure 67(f)):
Sample f(t):   fk = T f(kT);   Sample F(ω):   Fr = F(rω0),   ω0 = 2π/T0.
The relationship between these sampled signals is given by the equations derived in the previous page, namely:
Fr = Σ_{k=0}^{N0−1} fk e^{−jrΩ0 k},   Ω0 = ω0T = 2π/N0,   (72)

fk = (1/N0) Σ_{r=0}^{N0−1} Fr e^{jrΩ0 k}.   (73)
Equation (72) defines the direct discrete Fourier transform (DFT), and equation (73) defines the
inverse discrete Fourier transform (IDFT):
fk ⇐⇒ Fr
The sequences fk and Fr are N0 –periodic (see Figures 67(e) and 67(f)), so it only makes sense to find their
values at k = 0, 1, . . . , N0 − 1 and r = 0, 1, . . . , N0 − 1.
The DFT (and IDFT) relationships are transforms in their own right and are exact. However, when we identify
fk and Fr as the samples of a continuous-time signal f (t) and its spectrum F (ω), then the DFT relationships are
approximations, because of the aliasing and leakage effects (compare Figures 67(b) and 67(f)).
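Equations (72) and (73) translate directly into code; a Python sketch (function names are mine) verifying that the IDFT inverts the DFT:

```python
import cmath
import math

def dft(f):
    """Direct evaluation of the DFT definition (72)."""
    N0 = len(f)
    Omega0 = 2 * math.pi / N0
    return [sum(fk * cmath.exp(-1j * r * Omega0 * k) for k, fk in enumerate(f))
            for r in range(N0)]

def idft(F):
    """Direct evaluation of the IDFT definition (73)."""
    N0 = len(F)
    Omega0 = 2 * math.pi / N0
    return [sum(Fr * cmath.exp(1j * r * Omega0 * k) for r, Fr in enumerate(F)) / N0
            for k in range(N0)]

f = [1.0, 2.0, 3.0, 4.0]
back = idft(dft(f))
assert all(abs(a - b) < 1e-9 for a, b in zip(f, back))   # the IDFT undoes the DFT
```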
1. Linearity
If fk ⇐⇒ Fr and gk ⇐⇒ Gr , then,
a1 fk + a2 gk ⇐⇒ a1 Fr + a2 Gr .
2. Conjugate symmetry
From the conjugate symmetry property of the Fourier transform [recall (16)] we have that, for a real signal fk ,
F−r = Fr ∗ .
FN0 −r = Fr ∗ . (74)
Because of this property we need to only compute half the DFT for real signals fk .
4. Frequency-shifting
If fk ⇐⇒ Fr , then,
fk ejkΩ0 m ⇐⇒ Fr−m .
5. Circular convolution

fk ~ gk = Σ_{n=0}^{N0−1} fn gk−n = Σ_{n=0}^{N0−1} gn fk−n.
[fk ~ gk ]|k=0 = f0 g0 + f1 g3 + f2 g2 + f3 g1 ,
and, at k = 1,
[fk ~ gk ]|k=1 = f0 g1 + f1 g0 + f2 g3 + f3 g2 ,
and so on.
fk ~ gk ⇐⇒ Fr Gr,   (75)

and,

fk gk ⇐⇒ (1/N0) Fr ~ Gr.   (76)
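Property (75) can be verified directly; a Python sketch (my own helper names, not from the notes) comparing the circular convolution sum with the IDFT of the product Fr Gr:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(xk * cmath.exp(-2j * math.pi * r * k / N) for k, xk in enumerate(x))
            for r in range(N)]

def idft(X):
    N = len(X)
    return [sum(Xr * cmath.exp(2j * math.pi * r * k / N) for r, Xr in enumerate(X)) / N
            for k in range(N)]

def circ_conv(f, g):
    """Circular convolution: the index of g wraps around modulo N0."""
    N = len(f)
    return [sum(f[n] * g[(k - n) % N] for n in range(N)) for k in range(N)]

f, g = [1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 1.0, 0.0]
direct = circ_conv(f, g)
via_dft = idft([Fr * Gr for Fr, Gr in zip(dft(f), dft(g))])
assert all(abs(a - b.real) < 1e-9 and abs(b.imag) < 1e-9
           for a, b in zip(direct, via_dft))
```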
Define:

WN0 = e^{−j2π/N0} = e^{−jΩ0}.   (77)

Note that,

WN0/2 = e^{−j2π/(N0/2)} = e^{−j(2π/N0)×2} = WN0²,   (78)

and,

WN0^{r+(N0/2)} = WN0^{N0/2} WN0^r = e^{−j(2π/N0)×(N0/2)} WN0^r = e^{−jπ} WN0^r = −WN0^r.   (79)

With this notation, the DFT (72) reads:

Fr = Σ_{k=0}^{N0−1} fk WN0^{kr},   0 ≤ r ≤ N0 − 1.   (80)
Now, divide the N0–point sequence fk into two N0/2–point subsequences:

f0, f2, f4, . . . , fN0−2 (subsequence gk)   and   f1, f3, f5, . . . , fN0−1 (subsequence hk).   (81)
Note that we can also split the sum in (80) into two sub-sums, as follows,

Fr = Σ_{k=0}^{N0/2−1} f2k WN0^{2kr} + Σ_{k=0}^{N0/2−1} f2k+1 WN0^{(2k+1)r},
Thus, we can compute the first N0/2 points of Fr using (82) and the last N0/2 points using (83). That is,

Fr = Gr + WN0^r Hr,   0 ≤ r ≤ N0/2 − 1,
Fr+(N0/2) = Gr − WN0^r Hr,   0 ≤ r ≤ N0/2 − 1.   (84)
In conclusion, an N0–point DFT can be computed by combining two N0/2–point DFTs as in (84). Equations (84) can be conveniently represented with a butterfly structure, as shown in Figure 70.

Figure 70: Butterfly structure.
For example, for N0 = 8 (8-point DFT), the first step of the FFT algorithm consists in computing two 4-point
DFTs and then combining them according to (84) and Figure 70. This is shown in Figure 71. The next step is to
compute the 4-point DFTs, Gr and Hr , and, for this, we repeat the same procedure by dividing gk and hk into two
2-point sequences corresponding to even- and odd-numbered samples. This is shown in Figure 72.
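The recursive use of (84) is the whole FFT; a compact Python sketch (my own, assuming the length is a power of 2), checked against the direct definition (72):

```python
import cmath
import math

def fft(f):
    """Radix-2 decimation-in-time FFT implementing the butterfly (84).
    The length of f must be a power of 2."""
    N = len(f)
    if N == 1:
        return list(f)
    G = fft(f[0::2])                      # DFT of the even-indexed subsequence g_k
    H = fft(f[1::2])                      # DFT of the odd-indexed subsequence h_k
    out = [0j] * N
    for r in range(N // 2):
        w = cmath.exp(-2j * math.pi * r / N)   # twiddle factor W_N0^r
        out[r] = G[r] + w * H[r]               # first N0/2 points of (84)
        out[r + N // 2] = G[r] - w * H[r]      # last N0/2 points of (84)
    return out

# Check against the direct DFT definition (72):
f = [1.0, 2.0, 1.0, -1.0, 1.5, 0.0, -2.0, 0.5]
direct = [sum(fk * cmath.exp(-2j * math.pi * r * k / 8) for k, fk in enumerate(f))
          for r in range(8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(f), direct))
```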
Figure 71: 8-point DFT computed from two 4-point DFTs.

Figure 72: 4-point DFTs computed from two 2-point DFTs. (Note, from (78), that W4 = W8².)
The next, and final, step is to compute the 2-point DFTs. Note from (72) that the 2-point DFT (i.e., for N0 = 2) is

F0 = f0 + f1,
F1 = f0 + f1 e^{−jπ} = f0 − f1,

and, thus, multiplication in this case is not required. The computation of the 2-point DFTs is illustrated in Figure 73. Note that, at this point, we have reached the 1-point DFT (i.e., the original sequence of time-data itself).

Figure 73: 2-point DFTs computed from 1-point DFTs (the original time-data sequence).

To compute all the N0 points of Fr (see (84)) from Gr and Hr we require N0 complex additions and N0/2 complex multiplications (corresponding to the products WN0^r Hr). To compute the N0/2–point DFT Gr from the N0/4–point DFTs we require N0/2 complex additions and N0/4 complex multiplications, and the same for Hr. Hence, in the second step there are N0 complex additions and N0/2 complex multiplications.
Therefore, the number of computations required remains the same at each step. Since a total of log2 N0 steps is needed to arrive at the 1-point DFT (i.e., the original sequence), we require, conservatively, a total of N0 log2 N0 complex additions and (N0/2) log2 N0 complex multiplications to compute the N0–point DFT.

Recall (Page 43) that to compute the DFT from (72) we need of the order of N0² computations. With the FFT we instead need of the order of N0 log2 N0 computations (log2 N0 = (log10 N0)/(log10 2)). The order of the number of computations required by both methods is illustrated in Figure 74, where the advantages of the FFT algorithm can be clearly appreciated.

The procedure to obtain the IDFT is identical, with WN0 = e^{j2π/N0} and the additional multiplication by 1/N0 (see (73)).

Figure 74: Order of number of computations required to compute the DFT using (72) (N0²) and the FFT using (84) (N0 log2 N0).
Discrete-time signals and systems
When we sample a continuous-time signal (we consider uniformly spaced discrete instants, . . . , −2T, −T, 0, T, 2T, . . . , kT, . . . , where T is the sampling interval) we obtain a discrete-time signal f[k] = f(kT).

Discrete-time systems have several advantages over continuous-time systems (precision, stability, ease of duplication, flexibility, ease of alteration, use of IC technology resulting in low power consumption, etc.) and there is, thus, a trend nowadays towards processing continuous-time signals with discrete-time systems.
The discrete-time impulse δ[k] and a delayed version δ[k − m] are shown in Figure 77.

Figure 77: Discrete-time impulse.
Discrete-time sinusoid
A continuous-time sinusoid cos (ωt) sampled every T seconds yields a discrete-time sinusoid,
f [k] = cos (ωkT ) = cos (Ωk),
where Ω = ωT is the frequency in radians per sample.
An example of a discrete-time system
Consider a regular deposit made in a bank account every month. Denote by y[k] the balance at month k, by r the monthly interest rate, and by f[k] the deposit made in month k. We then have:

y[k] = y[k − 1] + r y[k − 1] + f[k],
(current balance = previous balance + interest + deposit)
Difference equations
Equation (89) is an example of a difference equation. In general,
Causality condition
For a causal system, the output cannot depend on future input values. So, in Equation (90) we require m ≤ n .
In general, we can write a causal system as:
We can then solve Equation (92) recursively. For example, to determine y[0] we need the values of y[−1],
y[−2], . . . , y[−n] (the initial conditions) and the values of the input f [0], f [−1], f [−2], . . . , f [−n]. We then store
y[0] and, when the next value of the input f [1] becomes available, we can compute y[1] (from the values of y[0],
y[−1], . . . , y[−n + 1] and f [1], f [0], . . . , f [−n + 1]), and so on.
Example 3
Find the unit impulse response of the system: y[k] − 0.6 y[k − 1] − 0.16 y[k − 2] = 5 f[k].

We let f[k] = δ[k] (that is, δ[k] = 1 for k = 0 and δ[k] = 0 for k ≠ 0) and y[k] = h[k]. The difference equation then becomes:
h[k] − 0.6 h[k − 1] − 0.16 h[k − 2] = 5 δ[k],
subject to zero initial conditions h[−1] = h[−2] = 0. We thus obtain:
For k = 0: h[0] = 5 δ[0] + 0.6 h[−1] + 0.16 h[−2] = 5 × 1 + 0.6 × 0 + 0.16 × 0 = 5,
For k = 1: h[1] = 5 δ[1] + 0.6 h[0] + 0.16 h[−1] = 5 × 0 + 0.6 × 5 + 0.16 × 0 = 3,
For k = 2: h[2] = 5 δ[2] + 0.6 h[1] + 0.16 h[0] = 5 × 0 + 0.6 × 3 + 0.16 × 5 = 2.6,
For k = 3: h[3] = 5 δ[3] + 0.6 h[2] + 0.16 h[1] = 5 × 0 + 0.6 × 2.6 + 0.16 × 3 = 2.04,
and so on.
The following Matlab® script computes the first 11 points of the previous iteration. The values obtained with the script are (the first two values of H are not displayed because they correspond to the initial conditions (equal to 0) at times k = −2 and k = −1; so we start from the third value corresponding to time k = 0):

H(3:13) = [5 3 2.6 2.04 1.64 1.3104 1.0486 0.8388 0.6711 0.5369 0.4295]

The plot of the impulse response is shown in Figure 80.

Figure 80: Example of impulse response.
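An equivalent sketch of the same recursion in Python (the function name is mine, not from the notes):

```python
def impulse_response(n_points=11):
    """Iterate h[k] = 5*delta[k] + 0.6*h[k-1] + 0.16*h[k-2], h[-1] = h[-2] = 0."""
    h = []
    for k in range(n_points):
        delta = 5.0 if k == 0 else 0.0
        h_1 = h[k - 1] if k >= 1 else 0.0   # h[k-1], zero before time 0
        h_2 = h[k - 2] if k >= 2 else 0.0   # h[k-2], zero before time 0
        h.append(delta + 0.6 * h_1 + 0.16 * h_2)
    return h

# The first points agree with the recursion worked out above: 5, 3, 2.6, 2.04, ...
h = [round(x, 4) for x in impulse_response()]
```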
Operational notation
In difference equations it is convenient to use operational notation. We use the operator E to denote the operation
of advancing a sequence by one time unit:
E f[k] = f[k + 1],
E² f[k] = f[k + 2],
. . .
Eⁿ f[k] = f[k + n].
Using this notation, the difference equation (91) can be written as
(Eⁿ + an−1 E^{n−1} + . . . + a1 E + a0) y[k] = (bn Eⁿ + bn−1 E^{n−1} + . . . + b1 E + b0) f[k],   (93)

where the polynomial acting on y[k] is denoted Q[E] and the polynomial acting on f[k] is denoted P[E],
or,
Q[E] y[k] = P [E] f [k], (94)
where Q[E] and P [E] are nth order polynomials of the operator E.
or,
y0 [k + n] + an−1 y0 [k + n − 1] + . . . + a1 y0 [k + 1] + a0 y0 [k] = 0. (95b)
To make this equation equal to zero for all values of k, the sequence y0[k] and its advanced versions have to have the same form. The function that has this property is the exponential function γ^k, since γ^{k+m} = γ^m γ^k, so that the advanced γ^k is equal to γ^k scaled by the constant γ^m. Hence, the solution must be of the form y0[k] = c γ^k.
Substituting in (95) we obtain:

c (γ^n + an−1 γ^{n−1} + . . . + a1 γ + a0) γ^k = 0,

where the polynomial in parentheses is Q[γ]. For a nontrivial solution we need Q[γ] = 0, which in factored form reads

Q[γ] = (γ − γ1)(γ − γ2) . . . (γ − γn) = 0.
Therefore, (95) has n possible solutions: c1 γ1^k, c2 γ2^k, . . . , cn γn^k, and the general solution is:

y0[k] = c1 γ1^k + c2 γ2^k + . . . + cn γn^k,   (97)
where γ1 , γ2 , . . . , γn are the solutions of (96) and c1 , c2 , . . . , cn are arbitrary constants obtained from n auxiliary
conditions (usually, the initial conditions). We have the following commonly used terminology:
so, γ1^k is one solution. Consider now the mode y0[k] = k γ1^k in (95):

Q[E] y0[k] = (E³ − 3 γ1 E² + 3 γ1² E − γ1³) k γ1^k = 0,

so, k γ1^k is also a solution! Let us try next (feeling lucky . . . ) with y0[k] = k² γ1^k in (95):

Q[E] y0[k] = (E³ − 3 γ1 E² + 3 γ1² E − γ1³) k² γ1^k = 0.
System stability
A system is asymptotically stable if the zero-input response approaches zero as k → ∞.
A system is marginally stable if the zero-input response neither approaches zero nor grows without bound as
k → ∞.
A system is unstable if the zero-input response grows without bound as k → ∞.
Consider the following facts:

if |γ| < 1, γ^k → 0 as k → ∞,
if |γ| > 1, |γ|^k → ∞ as k → ∞,
if |γ| = 1, |γ|^k = 1 for all k.
If we know the unit impulse response h[k] (see Page 48), i.e.,
δ[k] −→ h[k],
Therefore,

y[k] = Σ_{m=−∞}^{∞} f[m] h[k − m].   (102)

Figure 82: Signal f[k] represented in terms of unit impulse components.
The convolution sum
The summation in (102) is the convolution sum of f [k] and h[k], denoted as:
f[k] ∗ h[k] = Σ_{m=−∞}^{∞} f[m] h[k − m].
Causality
In general, we consider inputs to a system that start at k = 0 and are zero before that,
f [k] = 0 for k < 0.
If the system is causal, then h[k] = 0 for k < 0. In this case, for such f [k] and h[k], Equation (102) reduces to:
y[k] = Σ_{m=0}^{k} f[m] h[k − m].
Example 4
In Figure 83, two discrete-time functions f [k] and g[k] are shown together with the steps of the sliding tape
method and the final result c[k] = f[k] ∗ g[k]. The same convolution of these two sequences can be computed in Matlab® with the following command:

>> conv([0 1 2 3 2 1],[1 1 1])

resulting in:

ans = 0 1 3 6 7 6 3 1
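The same result can be reproduced outside Matlab; a direct Python implementation of the convolution sum (102) for finite causal sequences (helper name mine):

```python
def conv(f, g):
    """Direct evaluation of the convolution sum (102) for finite causal sequences."""
    y = [0] * (len(f) + len(g) - 1)
    for k in range(len(y)):
        for m in range(len(f)):
            if 0 <= k - m < len(g):    # only overlapping terms contribute
                y[k] += f[m] * g[k - m]
    return y

assert conv([0, 1, 2, 3, 2, 1], [1, 1, 1]) == [0, 1, 3, 6, 7, 6, 3, 1]
```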
Figure 83: Example of discrete-time convolution using the sliding tape method.
where the sequences f [k] and g[k] are N0 –periodic. Note that the circular convolution differs from the regular
(linear) convolution (102) by the facts that the summation is over one period and both sequences are N0 –periodic
(see, e.g., Figure 68), whereas in the linear convolution the summation is from −∞ to ∞ and the sequences are
not periodic (see, e.g., Figure 83). Fortunately, linear convolution can be made equivalent to circular convolution
by padding both sequences with zeros.
Suppose that we want to compute the linear convolution (102) of two finite length sequences f [k] and h[k] of
lengths (= numbers of elements) Nf and Nh , respectively. The linear convolution of these two sequences,
y[k] = f [k] ∗ h[k],
has length N0 = Nf + Nh − 1 (see the width property of the convolution sum on the previous page). Note that if
we pad N0 − Nf (= Nh − 1) zeros at the end of f [k] and N0 − Nh (= Nf − 1) zeros at the end of h[k], then both
sequences have now N0 elements and the first period of the circular convolution of the zero-padded f [k] and h[k]
is identical to their linear convolution (since the products of the parts that overlap due to the periodic repetitions
of f [k] and h[k] in the circular convolution are always equal to zero—for example, this can be seen in Figure 68,
if we pad 3 zeros at the end of both, f [k] and g[k], and now perform the circular convolution of the resulting
7-point sequences). So, with the sequences f [k] and h[k] zero-padded conveniently (so that both have now N0
elements) we have that y[k] = f [k] ~ h[k] is an N0 –periodic sequence (since it is the circular convolution of
two N0 –periodic sequences) whose first period is the linear convolution of (unpadded) f [k] and h[k]. According
to (103), the DFT of y[k] = f [k] ~ h[k] is given by:
Yr = Fr Hr .
In conclusion, the procedure to compute the linear convolution (102) of two finite length sequences f [k] and h[k]
of lengths Nf and Nh is:
1. Pad Nh − 1 zeros to f [k] and Nf − 1 zeros to h[k].
2. Find Fr and Hr , the DFTs of the zero-padded sequences f [k] and h[k].
3. Multiply Fr by Hr to obtain Yr .
4. The desired convolution y[k] is the IDFT of Yr .
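Steps 1-4 can be sketched in Python (helper names are mine; a direct O(N0²) DFT stands in for the FFT, since only the procedure itself is being illustrated):

```python
import cmath
import math

def dft(x, inverse=False):
    """Direct DFT/IDFT (slow, but enough to demonstrate the procedure;
    in practice each transform would be computed with the FFT)."""
    N, sign = len(x), (1 if inverse else -1)
    out = [sum(xk * cmath.exp(sign * 2j * math.pi * r * k / N)
               for k, xk in enumerate(x)) for r in range(N)]
    return [o / N for o in out] if inverse else out

def fast_conv(f, h):
    """Steps 1-4: zero-pad to N0 = Nf + Nh - 1, transform, multiply, invert."""
    N0 = len(f) + len(h) - 1
    F = dft(list(f) + [0.0] * (N0 - len(f)))   # step 1 + step 2
    H = dft(list(h) + [0.0] * (N0 - len(h)))
    Y = [a * b for a, b in zip(F, H)]          # step 3
    return [round(y.real, 6) for y in dft(Y, inverse=True)]   # step 4

assert fast_conv([0, 1, 2, 3, 2, 1], [1, 1, 1]) == [0, 1, 3, 6, 7, 6, 3, 1]
```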
Example 4 revisited
The following Matlab® script computes the convolution calculated in Example 4 above (see Figure 83) using the DFT computed with the FFT algorithm and its inverse counterpart.
For small length sequences, the direct convolution method (such as the sliding tape method) is faster than the
DFT method. However, for long sequences the DFT method using the FFT algorithm is much faster and far more
efficient. This is due to the fact that the use of the FFT algorithm to compute the DFT reduces the number of
computations dramatically, especially for large N0 (see Figure 74). The method of computing the convolution
using the FFT is known as fast convolution.
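The four-step fast-convolution procedure above can be sketched in Python with NumPy (used here in place of Matlab; the helper name fast_convolution is our own):

```python
import numpy as np

def fast_convolution(f, h):
    """Linear convolution of two finite sequences via the DFT.

    Follows the four-step procedure: zero-pad both sequences to
    length N0 = Nf + Nh - 1, multiply their DFTs, take the IDFT.
    """
    N0 = len(f) + len(h) - 1
    Fr = np.fft.fft(f, n=N0)      # n=N0 zero-pads f with Nh - 1 zeros
    Hr = np.fft.fft(h, n=N0)      # n=N0 zero-pads h with Nf - 1 zeros
    Yr = Fr * Hr                  # DFT of the circular convolution
    return np.fft.ifft(Yr).real   # first period = linear convolution

f = [1.0, 2.0, 3.0]
h = [1.0, 1.0]
print(np.round(fast_convolution(f, h), 6))   # same result as np.convolve(f, h)
```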
Total response
To find the total response of a linear time invariant (LTI) system, we can exploit the linearity of the system and use
superposition. The zero-input response y0 [k], due to the initial conditions of the system, is computed from (97),
(98) and (99), and satisfies Equation (95a) above, that is,
We can then see that the total response is the sum of the zero-input and zero-initial-condition responses,
y[k] = y0[k] + yf[k], since adding (94b) and (95a) we obtain:

Total response:  y[k] = [Expressions (97), (98) and (99)] + [f[k] ∗ h[k]],

where the first bracketed term is the zero-input component y0[k] and the second is the zero-initial-condition component yf[k].
F[z] = Z{f[k]} = Σ_{k=−∞}^{∞} f[k] z^{−k},  (104)
Key properties
Linearity
If f1 [k] ⇐⇒ F1 [z] and f2 [k] ⇐⇒ F2 [z] then, for any constants a1 and a2 ,
and that the input starts at time k = 0 and is zero before that. In particular,
f [−1] = f [−2] = . . . = f [−n] = 0. (108)
From (105), (107) and (108) we have, for m = 1, 2, . . . , n,
y[k − m] u[k] ⇐⇒ (1/z^m) Y[z],

f[k − m] u[k] ⇐⇒ (1/z^m) F[z],
and, using the linearity property of the Z-transform (see the previous page), we have that the Z-transform of
Equation (106) is given by
(1 + a_{n−1}/z + … + a_0/z^n) Y[z] = (b_n + b_{n−1}/z + … + b_0/z^n) F[z].
Rearranging the previous equation we obtain,
Y[z] = [(b_n z^n + b_{n−1} z^{n−1} + … + b_0) / (z^n + a_{n−1} z^{n−1} + … + a_0)] F[z],

where the bracketed ratio is the transfer function H[z].
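The correspondence between the difference equation (106) and the transfer function H[z] can be checked numerically. A sketch in Python (SciPy standing in for Matlab; lfilter implements exactly this recursion, with coefficients ordered from the highest power of z downwards; the first-order system is an assumed example):

```python
import numpy as np
from scipy.signal import lfilter

# First-order example: H[z] = (z + 0.5)/(z - 0.8),
# i.e. the recursion y[k] - 0.8 y[k-1] = f[k] + 0.5 f[k-1].
b = [1.0, 0.5]     # numerator coefficients, highest power of z first
a = [1.0, -0.8]    # denominator coefficients, highest power of z first

f = np.zeros(10)
f[0] = 1.0                   # impulse input delta[k]
h = lfilter(b, a, f)         # impulse response of the system

# Long division of H[z] gives h[0] = 1 and h[k] = 1.3 * 0.8**(k-1) for k >= 1.
print(np.round(h[:4], 4))
```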
Exercise 5
Show, using (104), that the Z-transform of the discrete-time impulse function δ[k] defined in (85) is
Z {δ[k]} = 1.
The result of Exercise 5 and Equation (109) tell us that the transfer function H[z] is the Z-transform of the
impulse response h[k] of the system since, when f[k] = δ[k], F[z] = 1 and, hence, Y[z] = H[z]F[z] = H[z] × 1 = H[z].
According to (104), the summation on the right-hand term above is the Z-transform of the impulse response
h[k], i.e., the transfer function H[z] of the system (see the previous conclusion, drawn after Exercise 5). Hence,
y[k] = H[z] z^k, which can be denoted by a directed arrow representation:

z^k (input) −→ H[z] z^k (output),
and we conclude that the output to a discrete-time sinusoidal input f [k] = cos (Ωk + θ) is given by:
This result only applies to asymptotically stable systems since the Z-transform we used in its derivation,
H[z] = Σ_{m=−∞}^{∞} h[m] z^{−m}, is valid only for values of z lying in the region of convergence of H[z]. For z = e^{jΩ}, z
lies on the unit circle (|z| = 1) and thus it is not included in the region of convergence for unstable and marginally
stable systems.
Equation (111) says that the response of an asymptotically stable linear discrete-time system to a discrete-time
sinusoidal input of frequency Ω is also a discrete-time sinusoid of the same frequency. The amplitude of the
output sinusoid is |H(e^{jΩ})| times the amplitude of the input, and the phase of the output is shifted by ∠H(e^{jΩ})
with respect to the input's phase. Therefore, H(e^{jΩ}), encompassing the information of both amplitude gain
|H(e^{jΩ})| and phase shift ∠H(e^{jΩ}), is the frequency response of the system.
The frequency response H(e^{jΩ}) is 2π–periodic. The physical reason for this periodicity is that, as explained on Page 47, discrete-time sinusoids
separated by values of Ω in integral multiples of 2π are identical. Therefore, the system response to such sinusoids
(or exponentials) is also identical. Thus, for discrete-time systems, we need to only plot the frequency response
over the frequency range from −π to π (or from 0 to 2π).
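This frequency-response interpretation can be verified numerically; a Python sketch (SciPy's freqz evaluates H(e^{jΩ}); the stable first-order system is an assumed example):

```python
import numpy as np
from scipy.signal import lfilter, freqz

# Asymptotically stable system H[z] = z/(z - 0.5) (pole inside the unit circle)
b, a = [1.0, 0.0], [1.0, -0.5]
Omega = 0.3                              # input frequency, rad/sample

w, H = freqz(b, a, worN=[Omega])         # H(e^{jOmega}) at the input frequency
gain, phase = np.abs(H[0]), np.angle(H[0])

# Drive the system with f[k] = cos(Omega k); once the transient has died out
# the output must be gain * cos(Omega k + phase), as (111) predicts.
k = np.arange(3000)
y = lfilter(b, a, np.cos(Omega * k))
y_pred = gain * np.cos(Omega * k + phase)
print(np.max(np.abs(y[-100:] - y_pred[-100:])))   # essentially zero
```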
Digital filters
Digital filters can be classified as either recursive (or IIR) filters or nonrecursive (or FIR) filters.
H[z] = (b_3 z^3 + b_2 z^2 + b_1 z + b_0) / (z^3 + a_2 z^2 + a_1 z + a_0).  (112)
Working backwards from the transfer function (110) to the difference equation (106) we can see that the input
sequence f [k] and the corresponding output sequence y[k] of this system are related by
The output is therefore determined iteratively (or recursively) from its past values. If we apply an impulse input
δ[k], the impulse response h[k] will continue forever (it propagates itself because of the recursive nature of the
filter) as k → ∞. For this reason, these filters are also called infinite impulse response (IIR) filters.
H[z] = (b_3 z^3 + b_2 z^2 + b_1 z + b_0) / z^3 = b_3 + b_2/z + b_1/z^2 + b_0/z^3,  (114)
and the difference equation (113) reduces to
y[k] = b_3 f[k] + b_2 f[k−1] + b_1 f[k−2] + b_0 f[k−3],

where every term on the right-hand side is an input term.
Note that y[k] is now computed only from the (present and past) values of the input f [k] (i.e., there is no recur-
sion). If we apply an impulse input f [k] = δ[k] to this system, the impulse response will be
We can see that the impulse input will “pass through” the system and will be “completely out of the system” by
time instant k = 4. Therefore, the duration of the impulse response h[k] of the filter is finite. For this reason,
these filters are also known as finite impulse response (FIR) filters.
In general, we can identify the coefficients
of the filter with the impulse response values as done before, e.g.,
b3 = h[0], b2 = h[1], . . . , b0 = h[3] and a generic nth order FIR filter impulse response can be expressed as
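That the coefficients of an FIR filter are its impulse response values is easy to check; a Python sketch (SciPy in place of Matlab; the coefficient values are an arbitrary example):

```python
import numpy as np
from scipy.signal import lfilter

# Third-order FIR filter: y[k] = b3 f[k] + b2 f[k-1] + b1 f[k-2] + b0 f[k-3]
b = [0.25, 0.5, 0.5, 0.25]       # [b3, b2, b1, b0]

delta = np.zeros(8)
delta[0] = 1.0                   # impulse input delta[k]
h = lfilter(b, [1.0], delta)     # denominator 1: no recursion

print(h)   # h[k] equals the coefficients for k = 0..3, zero from k = 4 onwards
```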
The transfer function, H[z], is the Z-transform of h[k] (see Page 57). Hence, applying (104) to (115) we obtain
H[z] = Z{h[k]} = Σ_{k=−∞}^{∞} h[k] z^{−k} = h[0] + h[1]/z + h[2]/z^2 + … + h[n]/z^n

= (h[0] z^n + h[1] z^{n−1} + h[2] z^{n−2} + … + h[n]) / z^n,  (116)
and the frequency response (see Page 58), H(e^{jΩ}) = H(e^{jωT}), is
Figure 85: Symmetry condition for linear phase response.
Figure 86: Antisymmetry condition for linear phase response.
Suppose now that the impulse response h[k] is symmetric about its center point. That is, h[0] = h[4] and
h[1] = h[3] (see Figure 85). We then have,
H(e^{jωT}) = e^{−j2ωT} [h[0](e^{j2ωT} + e^{−j2ωT}) + h[2] + h[1](e^{jωT} + e^{−jωT})].

The quantity inside the brackets in the last expression is real (there is no j term whatsoever), and represents
the amplitude response |H(e^{jωT})|, since e^{−j2ωT} has amplitude 1. The phase response is:

∠H(e^{jωT}) = −2ωT,
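The linear-phase property of a symmetric impulse response can be confirmed numerically; a Python sketch with T = 1 and an arbitrary symmetric 5-tap filter whose amplitude function stays positive on the chosen band:

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # symmetric: h[0]=h[4], h[1]=h[3]
Omega = np.linspace(0.1, 2.0, 50)         # frequencies (T = 1, so Omega = w*T)

# Frequency response H(e^{jOmega}) = sum_k h[k] e^{-j Omega k}
H = np.array([np.sum(h * np.exp(-1j * Om * np.arange(5))) for Om in Omega])

# Removing the factor e^{-j 2 Omega} leaves a purely real amplitude function,
# so the phase is exactly -2*Omega on this band.
A = H * np.exp(2j * Omega)
print(np.max(np.abs(A.imag)),
      np.max(np.abs(np.unwrap(np.angle(H)) + 2 * Omega)))
```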
the digital processor H[z] that will make the system on the top part of Figure 87 “equivalent” to a desired analog
system with transfer function Ha (s). We can aim at making the two systems behave similarly in the time domain
or in the frequency domain.
y(t) = lim_{T→0} T Σ_{m=−∞}^{∞} f(mT) ha(t − mT),
where h[k] is the impulse response of the digital filter (i.e., the inverse Z-transform of the transfer function H[z],
see Page 57).
For the two systems to be equivalent we require y(kT ) in (118) to be equal to y[k] in (119). Therefore, the
time-domain criterion for equivalence of the two systems is:
that is, h[k], the unit impulse response of system H[z] on the top of Figure 87 must be equal to T times the
samples of ha (t), the unit impulse response of the system Ha (s) on the bottom part of Figure 87, assuming that
T → 0. For this reason, this method is known as the impulse invariance criterion of filter design.
For the two systems of Figure 87 to be equivalent we require y(kT ) in (121) to be equal to y[k] in (122). Thus,
the frequency-domain criterion for equivalence of the two systems is that H(e^{sT}) = Ha(s).
With this criterion we only ensure that the digital filter’s response matches exactly that of the desired analog
filter at the sampling instants. If we want the two responses to match at every value of t we must have T → 0.
Therefore,
lim_{T→0} H(e^{sT}) = Ha(s).  (123)
We then take the Z-transform of this equation:
H[z] = T Z {ha (kT )} , (125)
and this yields the desired discrete-time transfer function H[z].
A systematic procedure to find H[z], given a desired analog filter transfer function Ha(s), consists in expanding
Ha(s) in partial fractions, finding the individual impulse response of each term in the partial fractions expansion
by the inverse Laplace transform (since the Laplace transform of the impulse response is equal to the transfer
function, see Page 22), then computing the Z-transform of each term sampled (t = kT), and adding all of them
together multiplied by T. The procedure can be simplified by the use of tables (see, e.g., Table 12.1 on Page 736
of Lathi's textbook). Also, the Matlab® function impinvar solves this problem. The input data are the coefficients
of the numerator and denominator polynomials of Ha(s) and the sampling interval T. The Matlab® function
impinvar returns the numerator and denominator polynomial coefficients of the desired digital filter H[z] (beware
though that there is a scaling factor T discrepancy between Matlab®'s solution and that of (125); see Lathi's book
for the details).
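A worked first-order example of the impulse invariance method, sketched in Python (SciPy's lfilter in place of Matlab; the analog prototype Ha(s) = 1/(s + 2) is an assumed example, with ha(t) = e^{−2t} and hence, from (125), H[z] = T z/(z − e^{−2T})):

```python
import numpy as np
from scipy.signal import lfilter

a_c, T = 2.0, 0.01                 # Ha(s) = 1/(s + 2), so ha(t) = exp(-2t)

# (125): H[z] = T * Z{ha(kT)} = T * z / (z - exp(-a_c*T))
b = [T, 0.0]
a = [1.0, -np.exp(-a_c * T)]

delta = np.zeros(200)
delta[0] = 1.0
h = lfilter(b, a, delta)           # digital impulse response

k = np.arange(200)
ha_sampled = np.exp(-a_c * k * T)  # samples of the analog impulse response
print(np.max(np.abs(h - T * ha_sampled)))   # criterion (120): h[k] = T*ha(kT)
```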
Therefore, we can obtain H[z] from Ha (s) by using the bilinear transformation:
s = (2/T) · (z − 1)/(z + 1).  (128)
A Matlab® function to find digital filters by the bilinear transformation method is bilinear. The input data
are the coefficients of the numerator and denominator polynomials of Ha(s) and the sampling frequency in Hz.
Matlab® returns the numerator and denominator polynomial coefficients of the desired digital filter H[z].
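The SciPy counterpart of that routine is scipy.signal.bilinear, with the same inputs; a sketch with an assumed first-order analog lowpass:

```python
import numpy as np
from scipy.signal import bilinear, freqz

wc, fs = 2 * np.pi * 10.0, 1000.0   # analog cutoff 10 Hz, sampling 1 kHz
num, den = [wc], [1.0, wc]          # Ha(s) = wc/(s + wc)

bz, az = bilinear(num, den, fs)     # digital filter H[z] via (128)

# The bilinear map sends s = 0 to z = 1, so the DC gain is preserved:
print(np.sum(bz) / np.sum(az))      # ~ Ha(0) = 1

# Near the cutoff the gain is ~ 1/sqrt(2) (warping is negligible for fs >> Fh)
w, H = freqz(bz, az, worN=[wc / fs])
print(abs(H[0]))
```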
Choice of T in the bilinear transformation method
Since there is no aliasing (of the kind obtained with the impulse invariance method), the only consideration in the
choice of the sampling interval is the maximum frequency of the signals to be processed. If the highest frequency
to be processed is Fh Hz (ωh = 2πFh rad/sec) then, to avoid signal aliasing, we must use [see (66)],
T ≤ 1/(2Fh) = π/ωh.
Figure 89: Analog filter and corresponding digital filter frequency responses.
must start with an analog filter H′(jω) which has gains g1, g2, …, gm at frequencies ω′1, ω′2, …, ω′m, respectively,
where:

ω′i = (2/T) tan(ωi T/2),  i = 1, 2, …, m.  (130)
Application of the bilinear transformation (128) to this filter yields the desired digital filter with gains g1, g2, …,
gm at frequencies ω1, ω2, …, ωm, respectively, since from (129) and (130) we have that the behaviour of the
analog filter at a frequency ω′i appears in the digital filter at frequency:

(2/T) tan^{−1}(ω′i T/2) = (2/T) tan^{−1}(tan(ωi T/2)) = ωi.
A simplified procedure
The procedure of prewarping followed by the bilinear transformation can be simplified if, instead of using (128)
we use
s = (z − 1)/(z + 1),  (131)
and, instead of using (130) we use
ω′i = tan(ωi T/2),  i = 1, 2, …, m.  (132)
The reason this simplification works just as well is that the factors 2/T cancel each other. If we use (131) instead
of (128) we obtain ωd = (2/T) tan^{−1}(ωa) instead of (129); hence, if we prewarp ωi according to (132) we obtain:

(2/T) tan^{−1}(ω′i) = (2/T) tan^{−1}(tan(ωi T/2)) = ωi.
Suppose we want to design a bandpass filter to satisfy the specifications given on the left plot of Figure 90
(cf. Figure 54). All critical frequencies (ωs1 , ωp1 , ωp2 , ωs2 ) are first prewarped using (132), thus obtaining
ω′s1, ω′p1, ω′p2, ω′s2. Next, we design (except for the prewarping of frequencies, the steps to design the analog
filter are identical to those presented on Page 32) the prototype lowpass filter Hp(s) with the specifications shown
on the right plot of Figure 90, where ω′s is given by:

ω′s = min{ (ω′p1 ω′p2 − (ω′s1)^2) / ((ω′p2 − ω′p1) ω′s1) ,  ((ω′s2)^2 − ω′p1 ω′p2) / ((ω′p2 − ω′p1) ω′s2) }.  (133)
We then convert the lowpass prototype filter to the desired analog bandpass filter by replacing s with T (s) in
Hp (s), where
T(s) = (s^2 + ω′p1 ω′p2) / ((ω′p2 − ω′p1) s).  (134)
Finally, using (131), s is replaced with (z − 1)/(z + 1). The two transformations can be combined into a single one:

Tbp[z] = T(s)|_{s=(z−1)/(z+1)} = [((z − 1)/(z + 1))^2 + ω′p1 ω′p2] / [(ω′p2 − ω′p1) (z − 1)/(z + 1)]

= [(z − 1)^2 + ω′p1 ω′p2 (z + 1)^2] / [(ω′p2 − ω′p1)(z + 1)(z − 1)]

= [(ω′p1 ω′p2 + 1) z^2 + 2 (ω′p1 ω′p2 − 1) z + (ω′p1 ω′p2 + 1)] / [(ω′p2 − ω′p1)(z^2 − 1)].
Therefore,

Tbp[z] = (z^2 + 2az + 1) / (b (z^2 − 1)),  where  a = (ω′p1 ω′p2 − 1)/(ω′p1 ω′p2 + 1)  and  b = (ω′p2 − ω′p1)/(ω′p1 ω′p2 + 1).
Thus, the digital bandpass filter transfer function H[z] can be obtained from the prewarped analog prototype
lowpass filter transfer function Hp (s) by directly replacing s with Tbp [z].
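A numerical sanity check of the combined transformation, sketched in Python (band edges chosen arbitrarily; with the simplified bilinear map (131), z on the unit circle gives s = j tan(ωT/2), so the prewarped passband edges should land on the prototype passband edges s = ∓j):

```python
import numpy as np

T = 1.0
wp1, wp2 = 0.8, 1.6                                     # digital passband edges
wp1p, wp2p = np.tan(wp1 * T / 2), np.tan(wp2 * T / 2)   # prewarped, via (132)

a = (wp1p * wp2p - 1) / (wp1p * wp2p + 1)
b = (wp2p - wp1p) / (wp1p * wp2p + 1)

def Tbp(z):
    # Combined lowpass-to-bandpass plus bilinear transformation
    return (z ** 2 + 2 * a * z + 1) / (b * (z ** 2 - 1))

# The digital band edges map onto the prototype edges s = -j and s = +j:
print(Tbp(np.exp(1j * wp1 * T)))   # ~ -j
print(Tbp(np.exp(1j * wp2 * T)))   # ~ +j
```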
According to the time-domain equivalence criterion (120), for T small enough, we must have h[k] = T ha(kT),
hence,

h[k] = (T/2π) ∫_{−π/T}^{π/T} Ha(jω) e^{jωkT} dω.  (135)
Windowing
The impulse response found in (135) has in general infinite duration. But, for an FIR filter, h[k] must have a finite
duration and must start at k = 0 for the filter to be causal. Consequently, the h[k] found in (135) needs to be
truncated using an N0–point (N0 = n + 1, where n is the order of the filter) window and then delayed by (N0 − 1)/2
to make it causal (recall that delay introduces linear phase, see Page 15). Straight truncation of data amounts
to using a rectangular window. Although such a window gives the smallest transition band (minimal spectral
spreading, see Figure 32 and the discussions on Page 19), it results in a slowly decaying oscillatory frequency
response in the stopband (due to leakage, see Figure 32 and the discussions on Page 19). The behaviour can be
corrected by using a tapered window (see Table 12.2 on Page 762 of Lathi’s textbook for some window functions
and their characteristics).
Once we know h[0], h[1], h[2], . . . , h[n] (obtained from (135) followed by windowing and delaying) we can
find the transfer function H[z] of the digital filter from (116) and the frequency response, H(e^{jωT}), from (117).
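A windowed-design sketch in Python for an assumed ideal lowpass Ha(jω) (equal to 1 for |ω| ≤ ωc and 0 elsewhere in the band), for which the integral (135) evaluates in closed form to a sinc; the order and cutoff are arbitrary choices:

```python
import numpy as np

T = 1.0
wc = np.pi / 4            # ideal lowpass cutoff; Ha(jw) = 1 for |w| <= wc
n = 20                    # filter order, N0 = n + 1 taps
N0 = n + 1
M = (N0 - 1) / 2          # delay that makes the truncated response causal

# (135) gives h[k] = sin(wc*k*T)/(pi*k); shift by M and apply a window:
k = np.arange(N0)
h = (wc * T / np.pi) * np.sinc(wc * (k - M) * T / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)
h *= np.hamming(N0)       # tapered window to suppress stopband leakage

H = lambda Om: np.sum(h * np.exp(-1j * Om * k))
print(abs(H(0.0)))        # passband gain ~ 1
print(abs(H(2.0)))        # deep-stopband gain ~ 0
```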
Nonrecursive filter design by the frequency-domain criterion: The frequency sampling method
The frequency-domain criterion [see (123)], with T small enough, is
Ha(s) = H(e^{sT}).
We shall realise this equality for real frequencies, that is, for s = jω, so:

Ha(jω) = H(e^{jωT}).  (136)
For an nth order filter there are N0 = n + 1 elements in h[k] [cf. (115)] and we can hope to force the two frequency
spectra in (136) to be equal only at N0 points. Because the spectral width is 2π/T (see Page 58), we choose these
frequencies equally spaced, ω0 = (2π/T)/N0 rad/sec apart. That is, the sampling interval of the spectrum is:

ω0 = 2π/(N0 T),  (137)
and we require
Ha(jrω0) = H(e^{jrω0 T}),  r = 0, 1, 2, …, N0 − 1.  (138)
The problem is now to determine the filter impulse response h[k] from the knowledge of the N0 uniform samples
of Ha (jω), that we can denote Hr :
Hr = Ha (jrω0 ), r = 0, 1, 2, . . . , N0 − 1. (139)
Recall that a digital filter’s transfer function is the Z-transform of its impulse response. For the finite impulse
response sequence h[0], h[1], . . . , h[N0 − 1], we have,
H[z] = Σ_{k=−∞}^{∞} h[k] z^{−k} = Σ_{k=0}^{N0−1} h[k] z^{−k}.
Recalling (72) we conclude that h[k] and Hr are a DFT pair with Ω0 = ω0 T. Hence, the desired h[k] is the IDFT
of Hr = Ha(jrω0) = H(e^{jrω0 T}), given by [see (73)]:
h[k] = (1/N0) Σ_{r=0}^{N0−1} Hr e^{jrkω0 T} = (1/N0) Σ_{r=0}^{N0−1} Hr e^{j2πrk/N0},  k = 0, 1, …, N0 − 1.  (140)
Thus, we can use the powerful IFFT routine (recall the computational efficiency of the FFT and IFFT algorithms
explained on Page 45 and illustrated in Figure 74) to compute the N0 values of h[k], as in (140), from the problem
data given by the samples of the desired spectrum (139).
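The round trip in (140) can be demonstrated in Python: sample the spectrum of a known FIR response at N0 points, then recover h[k] with the IFFT (the target response here is an arbitrary example):

```python
import numpy as np

N0 = 8
h_true = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])   # N0 taps

# Spectrum samples Hr = H(e^{j r w0 T}), with w0*T = 2*pi/N0 [cf. (139)]:
k = np.arange(N0)
Hr = np.array([np.sum(h_true * np.exp(-2j * np.pi * r * k / N0))
               for r in range(N0)])

# (140): h[k] is the IDFT of Hr, computed with the efficient IFFT routine
h = np.fft.ifft(Hr).real
print(np.round(h, 6))                               # recovers h_true

# Conjugate symmetry of the samples for a real h[k] [cf. (74)]:
print(np.allclose(Hr[1:], np.conj(Hr[1:][::-1])))   # True
```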
to multiplying Hr = H(e^{jrω0 T}) by e^{−jrΩ0 (N0−1)/2} = e^{−jr (N0−1) ω0 T/2} (see the time-shifting property
number 3 of the DFT):

Hr = H(e^{jrω0 T}) e^{−jr (N0−1) ω0 T/2} = H(e^{j2πr/N0}) e^{−jrπ (N0−1)/N0},  r = 0, 1, 2, …, N0 − 1.  (141)
The desired impulse response h[k], k = 0, 1, . . . , N0 − 1 is obtained with the IDFT (or IFFT) and is causal and
has a linear phase response.
In obtaining the samples of (141) we can do it for r = 0, 1, …, (N0 − 1)/2, and the ones for
r = (N0 + 1)/2, (N0 + 3)/2, …, N0 − 1 can be obtained from the conjugate symmetry property [see (74)]:

Hr = H*_{N0−r}.