
Course: WB3250 Signaalanalyse (2007-2008)

Exercise session 1 - RECAP - CONTINUOUS-TIME SIGNALS

PART 1: recap complex numbers, Euler formula and integrals

Recap - complex numbers, Euler formula



Consider a complex number x = a + jb with a, b ∈ R and j = √−1.

Modulus |x| of x: |x| = √(a² + b²)

Argument ∠x of x: ∠x = tan⁻¹(b/a)

The modulus and argument of x allow one to rewrite x in polar form:

x = a + jb = |x| e^{j∠x}

where e^{jy} = cos(y) + j sin(y) (Euler formula). Thus x = |x| (cos(∠x) + j sin(∠x)), so that Re(x) = a = |x| cos(∠x) and Im(x) = b = |x| sin(∠x). The different concepts are illustrated in the complex plane in Figure 1.

Figure 1: Complex plane (real axis: a = Re(x); imaginary axis: b = Im(x); the modulus |x|, the argument ∠x and the conjugate x̄ are indicated)

Finally, the conjugate x̄ of x = a + jb = |x| e^{j∠x} is defined as:

x̄ = a − jb = |x| e^{−j∠x}

The conjugate is also represented in Figure 1.
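For readers who want to verify these relations numerically, the following short Python sketch (an illustration added here, not part of the course sheet) computes the modulus, argument, conjugate and polar form of x = 1 + 2j; the same number reappears in exercise a. below.

```python
# Illustrative sketch: modulus, argument, conjugate and polar form of x = 1 + 2j.
import cmath

x = 1 + 2j                       # a = 1, b = 2
modulus = abs(x)                 # |x| = sqrt(a^2 + b^2) ~ 2.236
argument = cmath.phase(x)        # angle = atan(b/a) ~ 1.107 rad
conjugate = x.conjugate()        # a - jb

# Rebuild x from its polar form |x| e^{j*angle} and compare with the original.
x_polar = modulus * cmath.exp(1j * argument)
print(modulus, argument, conjugate)
print(abs(x - x_polar) < 1e-12)  # True: both forms describe the same number
```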

Exercises

a. Consider the complex number x = 1 + 2j. Compute |x|, ∠x and x̄.

b. Given two complex numbers x and y, prove that |xy| = |x| |y| and that ∠(xy) = ∠x + ∠y.

c. Given two complex numbers x and y, prove that |x/y| = |x|/|y| and that ∠(x/y) = ∠x − ∠y.

d. Given x = 1 + 2j and y = 2 + j, compute xy and x/y.

e. Show that x = 3 ejπ is also equal to -3.

f. Show that ej2π and ej10π are both equal to 1.

g. Given x = 3 e^{jπ} and y = 2 e^{jπ/2}, compute xy and x/y.

h. Show that xx̄ is equal to |x|2 .


i. Show that cos(φ) = ½ (e^{jφ} + e^{−jφ}) for φ ∈ R. Give a similar expression for sin(φ).

Recap - integral
In order to compute ∫_{t=t1}^{t=t2} f(t) dt of a function f(t) of the variable t, we need to compute the primitive Pf(t) of f(t). The primitive Pf(t) is the function such that:

dPf(t)/dt = f(t)
Having determined the primitive, the integral can then be computed as follows:

∫_{t=t1}^{t=t2} f(t) dt = [Pf(t)]_{t=t1}^{t=t2} = Pf(t = t2) − Pf(t = t1)
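As a quick illustration (a sketch added here, assuming SymPy is available), the integrals asked for in the exercises below can be checked symbolically against the primitive-based rule above.

```python
# Illustrative check of the primitive-based integration rule with SymPy.
import sympy as sp

t, b = sp.symbols('t b', positive=True)
a = sp.symbols('a', real=True)

# Primitive of cos(t) is sin(t); the integral over one full period is zero.
print(sp.integrate(sp.cos(t), (t, 0, 2*sp.pi)))      # 0

# Primitive of a*exp(-b*t) is (-a/b)*exp(-b*t); integral over [0, inf) is a/b.
print(sp.integrate(a*sp.exp(-b*t), (t, 0, sp.oo)))   # a/b
```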

Exercises
a. Compute ∫₀^{2π} cos(t) dt.

b. Compute ∫₀^{2π} e^{jnt} dt with n an integer. Distinguish the case where n = 0 and where n ≠ 0.

c. Compute ∫₀^{∞} a e^{−bt} dt with a, b ∈ R and b > 0.

PART 2: continuous-time signals

A continuous-time signal x(t) is a physical variable as a function of the time t. The signal
x(t) is valued in R for each t ∈ R.

Exercise 1. Consider the signals x(t), y(t) and z(t) given in the figure below. Note that x(t) and y(t) are zero everywhere except between t = 0 and t = 1 and that z(t) is zero everywhere except between t = 0 and t = 2. Express z(t) as a linear combination of shifted versions of x(t) and y(t).
[Figure: the signals x(t), y(t) and z(t), each shown on its own time axis]

Exercise 2. cfr. book pp. 37, exercise 1.1.i.

Exercise 3. cfr. book pp. 39, exercise 1.4, sub-questions (a) and (b)

The references to the book pertain to the third edition. At the end of the session, there is a conversion table for the second edition.

SOLUTIONS.
Solution of the exercises on complex numbers.
a. |x| = √(1 + 4) = √5 and ∠x = tan⁻¹(2/1) = 1.1 rad. x̄ = 1 − 2j.

b. Let us write x and y in polar form:

x = |x| ej∠x y = |y| ej∠y

We can then write that:

xy = |x| ej∠x |y| ej∠y = |x||y| ej(∠x+∠y)

and we see that |xy| = |x||y| and that ∠(xy) = ∠x + ∠y.

c. Let us also write x and y in polar form:

x = |x| ej∠x y = |y| ej∠y

We can then write that:


x/y = (|x| e^{j∠x}) / (|y| e^{j∠y}) = (|x|/|y|) e^{j(∠x−∠y)}

and we see that |x/y| = |x|/|y| and that ∠(x/y) = ∠x − ∠y.

d. xy = (1 + 2j)(2 + j) = 2 + 4j + j + 2j² = 2 + 5j − 2, where we made use of the fact that j² = −1. Thus, xy = 5j. Now, let us compute x/y:

x/y = xȳ/(yȳ) = (1 + 2j)(2 − j) / ((2 + j)(2 − j)) = (2 − j + 4j + 2) / (4 − 2j + 2j + 1) = (4 + 3j)/5 = 4/5 + (3/5) j

e. x = 3 ejπ can be written as x = 3 (cos(π) + jsin(π)) and thus x = 3(−1 + 0j) = −3.

f. ej2π can be written as

ej2π = 1 ej2π = 1 (cos(2π) + jsin(2π))

which is equal to 1. The same can be said for ej10π . Indeed, ej10π can be written as
1 (cos(10π) + jsin(10π)) which is also equal to 1. In fact, ej2nπ = 1 for all integer n.

g. The product and ratio of x and y can e.g. be deduced as follows:



xy = (3 e^{jπ})(2 e^{jπ/2}) = 6 e^{j(π+π/2)} = 6 e^{j3π/2} (= −6j)

x/y = (3 e^{jπ}) / (2 e^{jπ/2}) = (3/2) e^{j(π−π/2)} = (3/2) e^{jπ/2} (= (3/2) j)

h. xx̄ = (|x| ej∠x )(|x| e−j∠x ) = |x|2 ej0 = |x|2 .

i. ½ (e^{jφ} + e^{−jφ}) = ½ (cos(φ) + j sin(φ) + cos(−φ) + j sin(−φ)) = ½ (2 cos(φ)) = cos(φ). In order to find an expression for sin(φ), we use a similar reasoning. Since sin(−φ) = −sin(φ), we will consider:

e^{jφ} − e^{−jφ}

which is equal to 2j sin(φ). Consequently:

sin(φ) = (1/(2j)) (e^{jφ} − e^{−jφ}) = (−j/2) (e^{jφ} − e^{−jφ})
Solution of the exercises on integrals.

a. Here f(t) = cos(t) and thus Pf(t) = sin(t). Consequently, the integral is equal to ∫₀^{2π} cos(t) dt = sin(2π) − sin(0) = 0 − 0 = 0. This is in fact logical, since we integrate a cosine over one of its periods.

b. Here f(t) = e^{jnt}. Let us first consider the case n ≠ 0. Using the Euler formula, we can rewrite it as f(t) = e^{jnt} = cos(nt) + j sin(nt), whose primitive is given by:

Pf(t) = sin(nt)/n + j (−cos(nt)/n)

Consequently,

∫₀^{2π} e^{jnt} dt = (sin(2πn)/n + j (−cos(2πn)/n)) − (sin(0)/n + j (−cos(0)/n)) = (0 − j/n) − (0 − j/n) = 0

where we have used the fact that n is an integer. Consider now the case n = 0. Then f(t) = 1, whose primitive is Pf(t) = t. Consequently, when n = 0, the integral is equal to 2π − 0 = 2π.

c. Here f(t) = a e^{−bt}. The primitive is thus Pf(t) = (−a/b) e^{−bt}. Consequently:

∫₀^{∞} a e^{−bt} dt = [(−a/b) e^{−bt}]₀^{∞} = 0 − (−a/b) = a/b

Indeed, since b > 0, lim_{t→∞} e^{−bt} = 0.

Solution of the exercises on continuous-time signals.

Exercise 1. z(t) = y(t) + 2x(t − 1) − y(t − 1)



Exercise 2. Denote the triangular pulse of length τ by vτ (t) (i.e. vτ (t) = (1 − 2|t|/τ )pτ (t)).
The signal x(t) in subplot (a) of Figure P1.1 is equal to p4 (t) + p2 (t). The signal x(t) in
subplot (b) is equal to 34 v8 (t) − 31 v2 (t). The signal in subplot (c) is equal to 2 p4 (t) + v4 (t).

The one in subplot (d) is p2(t) − v2(t). The signal x(t) in subplot (e) is a periodic signal with period 2. The first period (i.e. from t = 0 till t = 2) is given by p1(t − ½). Consequently, we have:

x(t) = Σ_{k=−∞}^{∞} p1(t − ½ + 2k)

Exercise 3. The signal x(t) in sub-question (a) is given by:

x(t) = 1 for −1 ≤ t < 1
x(t) = −1 for 1 ≤ t < 3
x(t) = 0 elsewhere

The signal in sub-question (b) is:

x(t) = −t for 0 ≤ t < 1
x(t) = 1 for 1 ≤ t < 2
x(t) = 0 elsewhere

Conversion table.

THIRD edition SECOND edition


p. 37 exercise 1.1.i p. 47 exercise 1.1.a
p. 39 p. 49

Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 2:
CONTINUOUS-TIME FOURIER SERIES AND TRANSFORM

The references to the book pertain to the third edition. At the end of the session, there is a conversion table for the second edition.

PART 1: Fourier series of (continuous-time) periodic signals

A continuous-time signal x(t) is a physical variable as a function of the time t. The signal
x(t) is valued in R for each t ∈ R.

A signal x(t) is even if x(t) = x(−t) and it is odd if x(t) = −x(−t).

A continuous-time signal x(t) is periodic if and only if we can find T ∈ R such that x(t) = x(t + T) for all t. The smallest T having this property is called the fundamental period of x(t). The fundamental frequency ω0 of x(t) is defined as ω0 = 2π/T with T the fundamental period. See pp. 5-6 of the book for more details.

Each periodic signal x(t) can be decomposed in a series of harmonics kω0 of its fundamental frequency ω0, i.e.

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t}    with    ck = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0 t} dt

with T the fundamental period of x(t). This decomposition is called the Fourier series of x(t) and the coefficients ck are called the Fourier coefficients of x(t). This is in fact the complex exponential form of the Fourier series. The Fourier series can also be expressed in a trigonometric form (see p. 101 of the book for more details).

The power P of a periodic signal x(t) is defined by:

P ≜ (1/T) ∫_{−T/2}^{T/2} x²(t) dt

with T the fundamental period of x(t). The power can also be computed using the Fourier coefficients (Parseval's theorem):

P = Σ_{k=−∞}^{+∞} |ck|²
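As an illustration (a numerical sketch added here, using an example signal not taken from the exercises), the coefficient formula and Parseval's theorem can be checked by approximating the integrals on a fine time grid.

```python
# Numerical check of the Fourier-coefficient and Parseval formulas above,
# using the example x(t) = 2 cos(w0 t) with fundamental period T = 10.
import numpy as np

T = 10.0
w0 = 2*np.pi/T
t = np.linspace(-T/2, T/2, 20001)
x = 2*np.cos(w0*t)

def ck(k):
    # c_k = (1/T) * integral over one period of x(t) e^{-j k w0 t} dt
    return np.trapz(x*np.exp(-1j*k*w0*t), t)/T

coeffs = {k: ck(k) for k in range(-5, 6)}
power_def = np.trapz(x**2, t)/T                        # (1/T) int x^2 dt
power_parseval = sum(abs(c)**2 for c in coeffs.values())
print(coeffs[1], coeffs[-1])       # both ~ 1, all other coefficients ~ 0
print(power_def, power_parseval)   # both ~ 2
```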

Exercise 1 (examination problem January 2006). The continuous-time signal x(t) is generated according to

x(t) = Σ_{k=1}^{3} bk cos(kω0 t)

with ω0 = π/5 rad/sec, and

bk = 2/(kπ) for k = 1, 3
bk = 0 for k = 2

a. Is x(t) a periodic signal, and if so, what is its fundamental period?

b. Is x(t) an even or odd function? Motivate your answer.

c. Determine the Fourier series of x(t) in the complex exponential form. For this purpose, use the formula cos(φ) = ½ (e^{jφ} + e^{−jφ}).

d. Determine the power of the signal x(t).

Exercise 2 (examination January 2007). In Figure 1, the infinite-time signal x(t) is depicted over the interval [−10, 10]. This continuous-time signal x(t) is periodic and is generated as

x(t) = a0 + Σ_{k=1}^{3} ak sin(kω0 t)

with ω0 = 2π/10 rad/s, a0 = 1, a1 = 2, a2 = 0 and a3 = ½.

Figure 1: signal x(t) of exercise 2 (plotted for t ∈ [−10, 10] s)

a. Is x(t) an odd signal, an even signal or neither of the two? Motivate your answer.

b. Determine the fundamental period of x(t) and its fundamental frequency.

c. Determine the Fourier series of x(t) in the complex exponential form.

d. Determine the power of the signal x(t).

Exercise 3. Let x(t) = sin(0.4πt), y(t) = cos(1.4πt + π/4) and z(t) = x(t) + y(t).

a. Is z(t) a periodic signal? If yes, what is its fundamental period Tz ?

b. Prove using the formula cos(φ) = ½ (e^{jφ} + e^{−jφ}) that z(t) can be written as follows:

z(t) = a−2 e^{−jω2 t} + a−1 e^{−jω1 t} + a1 e^{jω1 t} + a2 e^{jω2 t}

with a1 = 0.5 e^{−jπ/2}, a−1 = 0.5 e^{jπ/2}, a2 = 0.5 e^{jπ/4}, a−2 = 0.5 e^{−jπ/4}, ω1 = 0.4π and ω2 = 1.4π.

c. Prove that the expression in item [b.] is equivalent to the Fourier series of z(t).

d. Compute the power Pz of the periodic signal z(t) using Parseval's theorem. Verify that we obtain the same value for the power Pz if we use, instead, the definition of the power of a signal:

Pz = (1/T) ∫_{−T/2}^{T/2} z²(t) dt

Note for this purpose that cos(A) cos(B) = ½ (cos(A − B) + cos(A + B)).

Exercise 4 (examination November 2006). Consider the infinite-time signal x(t) de-
picted in Figure 2 in the interval [−40, 40]. The continuous-time signal x(t) is a periodic
signal with a fundamental period of 40 seconds.
Figure 2: signal x(t) of exercise 4 (plotted for t ∈ [−40, 40] s)

a. Is the signal x(t) an even signal or an odd signal? Motivate your answer.

b. What is the fundamental frequency ω0 of the signal x(t)?

The complex exponential form of the Fourier series of the periodic signal x(t) is given by:

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t}

where the fundamental frequency ω0 has been determined in item [b.] and where the Fourier coefficients ck are given by:

ck = 0 when k = 0
ck = (2/(kπ)) sin(kπ/2) when k ≠ 0

c. Explain based on Figure 2 why c0 = 0


d. Based on the complex exponential form of the Fourier series given above, determine the parameters ak (k = 0 ... +∞), bk (k = 1 ... +∞) in the trigonometric form of the Fourier series of x(t):

x(t) = a0 + Σ_{k=1}^{+∞} (ak cos(kω0 t) + bk sin(kω0 t))

e. Determine the power of x(t)


Exercise 5. Consider the periodic signal x(t) represented in the figure below.

a. Compute the Fourier series of x(t). Hint: determine separately c0 and ck for k 6= 0.
b. Check the value of c0 based on the figure representing x(t).
c. Compute the power of the periodic signal x(t) using Parseval's theorem. Verify that we obtain the same value for the power if we use, instead, the definition of the power of a signal: P ≜ (1/T) ∫_{−T/2}^{T/2} x²(t) dt. Remember for this purpose that, if x ≤ π,

Σ_{k=1}^{∞} (1/k²) sin²(kx) = 0.5 x (π − x)

d. How does the Fourier series change if the signal x(t) is right-shifted by 0.5 T ?

Exercise 6. Consider the periodic signal x(t) given in the figure below.

Compute the Fourier series of x(t) by making use of the results obtained in Exercise 5.

Exercise 7 (examination June 2007). Consider the continuous-time signal x(t) =


e−t cos(10t)u(t) with u(t) the unit step function. Does there exist a Fourier series for the
signal x(t)? Motivate your answer.

PART 2: Continuous-time Fourier transform

The Fourier transform F (x(t)) = X(ω) of a continuous-time signal x(t) describes the fre-
quency content of the signal x(t). X(ω) is a complex function of the frequency ω. It is
defined as:
X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

The Fourier transform of a given signal x(t) can be computed

• either by applying the definition (thus by evaluating the integral above)

• or by using standard Fourier transform pairs in combination with properties of the


Fourier transforms. A very simple example: if we know that the Fourier transform of
x1 (t) is given by X1 (ω) and that the Fourier transform of x2 (t) is given by X2 (ω), then
the Fourier transform X(ω) of x(t) = x1 (t) + x2 (t) can be deduced by the linearity
property of the Fourier transform i.e. X(ω) = X1 (ω)+X2(ω). Table 3.2 on page 144 of
the book summarizes the Fourier transforms of some common signals. The properties
of the Fourier transform are presented in Section 3.6 of the book and are summarized
in Table 3.1 on page 141.
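As an added illustration of the first route (evaluating the defining integral), the sketch below numerically approximates the Fourier transform of the rectangular pulse pτ(t) and compares it with the closed-form transform (2/ω) sin(ωτ/2) quoted later in this session; the numbers used are example values.

```python
# Approximating the defining integral of the Fourier transform numerically
# for the rectangular pulse p_tau(t), and comparing with (2/w) sin(w*tau/2).
import numpy as np

tau = 1.0
t = np.linspace(-5, 5, 200001)
x = np.where(np.abs(t) < tau/2, 1.0, 0.0)      # rectangular pulse of length tau

w = 3.0                                         # evaluate X(w) at one frequency
X_num = np.trapz(x*np.exp(-1j*w*t), t)          # X(w) = int x(t) e^{-jwt} dt
X_closed = 2/w*np.sin(w*tau/2)
print(X_num, X_closed)                          # both ~ 0.665
```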

Exercise 1. Consider the signal h(t) defined as follows:

h(t) = 1 if 0 ≤ t < 1
h(t) = 0 elsewhere

a. The rectangular pulse pτ(t) of length τ is defined as:

pτ(t) = 1 if −τ/2 ≤ t < τ/2
pτ(t) = 0 elsewhere

Give an expression of h(t) using pτ (t).

b. Knowing that the Fourier transform of pτ(t) is given by (2/ω) sin(ωτ/2) (see page 119 of the book), compute the Fourier transform H(ω) using what has been found in item [a.] and the hint below.

c. Which linear operation do we have to perform on h(t) so that the Fourier transform of the resulting signal is real for all ω (i.e. its imaginary part is zero for all ω)?

d. What is the Fourier transform G(ω) of g(t) = h(t) − h(−t) ? Use a property of the
Fourier transform (see the hint below).

Hints for items [b.] and [c.]: consider a signal x(t) whose Fourier transform is denoted
by X(ω). Then, the Fourier transforms of x(t − c) (c ∈ R) and of x(−t) are given by:

F (x(t − c)) = X(ω) e−jωc expression (3.41) in the book

F (x(−t)) = X(−ω) expression (3.45) in the book

Exercise 2. Let a, ω0 , φ be given constants. Compute the Fourier transform X(ω) of:

a. x(t) = (1 − e−at ) u(t) with u(t) the unit step function (see page 2 of the book).

b. x(t) = sin(ω0 t + φ).

c. x(t) = δ(t − a) with δ(t) the unit impulse.

d. x(t) = e−at sin(ω0 t) u(t).

Hint: recall that, in Table 3.2 of the book, we see that

F (u(t)) = 1/(jω) + πδ(ω)

F (sin(ω0 t)) = jπ (δ(ω + ω0 ) − δ(ω − ω0 ))

F (δ(t)) = 1

Recall furthermore for item [d.] that, if x(t) = y(t)sin(ω0 t), then the Fourier transform
X(ω) of x(t) is given by: X(ω) = (j/2) (Y (ω + ω0 ) − Y (ω − ω0 )) with Y (ω) the Fourier
transform of y(t). This is property (3.51) of the book.

Exercise 3.
a. Prove that the Fourier transform of x(−t) is given by X(−ω). Prove subsequently that
X̄(ω) = X(−ω) for real-valued x(t). Finally, show that Im(X(ω)) = 0 ∀ω for even
signals and Re(X(ω)) = 0 ∀ω for odd signals.

b. Prove the property (3.50) of the book

c. Prove the property (3.52) of the book

Exercise 4. Consider the signal x(t) depicted below:

The Fourier series of the signal x(t) is:

x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0 t}

with a0 = Vτ/T, ak = (V/(kπ)) sin(kπτ/T) and ω0 = 2π/T. See exercise 5 of PART 1 of this session.

a. Compute the Fourier transform X(ω) of x(t) from its Fourier series. Recall for this
purpose that the Fourier transform of ejωo t is given by 2πδ(ω − ω0 ). This is property
(3.74) in the book.

b. Compute the Fourier transform G(ω) of g(t) = x(t) pT (t) with pT (t) the rectangular
pulse of length T (see item [a.] of exercise 1). The signal g(t) corresponds to one of
the periods of x(t).

SOLUTIONS.
SOLUTIONS: PART 1

Exercise 1.

1.a. The signal x(t) is made up of the sum of two cosines (since b2 = 0). Since the frequencies of these two cosines are integer multiples of ω0, we can directly conclude that x(t) is periodic and that the fundamental frequency is ω0. The fundamental period is then 2π/ω0 = 10 sec.

We can also make the full reasoning. For this purpose, we again start from the fact that x(t) is made up of the sum of two cosines. The fundamental period of the first cosine is T1 = 2π/ω0. The fundamental period of the second cosine is T2 = 2π/(3ω0). The signal x(t) is then periodic if and only if (cfr. pp. 5-6 of the book) T1/T2 can be written as a ratio q/r of two integers. Here, x(t) is indeed periodic since

T1/T2 = 3/1.

Since 3 and 1 are two coprime integers, the fundamental period T of x(t) is T = T1 = 3T2. Consequently, T = 2π/ω0 = 10 s since ω0 = π/5. The result can be verified as follows:

x(t + 10) = b1 cos((π/5)(t + 10)) + b3 cos((3π/5)(t + 10)) = x(t)

since cos((π/5)(t + 10)) = cos((π/5)t) and cos((3π/5)(t + 10)) = cos((3π/5)t). The above equation shows that x(t) is periodic with period T = 10 s, and T is the fundamental period because it is the smallest number for which the above equation holds.

1.b. x(t) is an even function. Because cos(t) = cos(−t) for all t, it follows that x(t) = x(−t).

1.c. The signal x(t) is given by:

x(t) = b1 cos(ω0 t) + b3 cos(3ω0 t)

with b1 = 2/π and b3 = 2/(3π). It is asked to determine the Fourier series of x(t) in its
complex exponential form i.e. to rewrite x(t) as a summation of complex exponentials at
harmonics of its fundamental frequency ω0 :

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t}

To determine the Fourier coefficients in this expansion, use can be made of the definition ck = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0 t} dt. However, here, the use of the definition is not at all necessary, since x(t) is already given in the form of a summation of cosines at harmonics of the fundamental frequency ω0 of x(t)¹. Consequently, in order to determine the Fourier coefficients, we will just rewrite the cosines into complex exponentials using the Euler formula: cos(φ) = ½ (e^{jφ} + e^{−jφ}). This delivers:

x(t) = b1 cos(ω0 t) + b3 cos(3ω0 t) = (b1/2)(e^{jω0 t} + e^{−jω0 t}) + (b3/2)(e^{j3ω0 t} + e^{−j3ω0 t})
     = (b3/2) e^{−j3ω0 t} + (b1/2) e^{−jω0 t} + (b1/2) e^{jω0 t} + (b3/2) e^{j3ω0 t}

The latter expression is the Fourier series of x(t) (exponential form). It is indeed in the form x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t} with the following values for the Fourier coefficients ck:

c1 = c−1 = b1/2 = 1/π

c3 = c−3 = b3/2 = 1/(3π)

ck = 0 for all other values of k

1.d. Using Parseval's theorem, the power of x(t) is given by

Σ_{k=−3}^{3} |ck|² = 2 (1/π² + 1/(9π²)) = 20/(9π²).

Exercise 2.

2.a. Due to the constant term a0, the signal is neither even nor odd since

x(−t) = a0 + Σ_{k=1}^{3} ak sin(kω0(−t)) = a0 − Σ_{k=1}^{3} ak sin(kω0 t)

is neither equal to x(t) nor to −x(t).

2.b. The signal x(t) is made up of harmonics of ω0. The fundamental frequency is thus equal to ω0 = 2π/10 rad/s, and the fundamental period is given by T = 2π/ω0 = 10 seconds. That x(t) indeed has a period of 10 seconds can also be checked in Figure 1.

2.c. The signal x(t) can be rewritten as:


¹ i.e. x(t) is already given in the trigonometric form of the Fourier series.

x(t) = a0 + Σ_{k=1}^{3} ak sin(kω0 t)
     = a0 + Σ_{k=1}^{3} (ak/(2j)) (e^{jkω0 t} − e^{−jkω0 t})    since sin(φ) = (1/(2j)) (e^{jφ} − e^{−jφ})

The Fourier series of a signal of fundamental frequency ω0 is defined as

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t}

with ck the Fourier coefficients. Comparing the two expressions, we can see that the Fourier coefficients ck are given by c−3 = −a3/(2j) = j/4, c3 = a3/(2j) = −j/4, c−2 = c2 = 0, c−1 = −a1/(2j) = j, c1 = a1/(2j) = −j, c0 = a0 = 1 and ck = 0 for all other k.

2.d. Using Parseval's theorem, we have that the power P is:

P = Σ_{k=−∞}^{∞} |ck|² = (1/4)² + 0 + 1 + 1 + 1 + 0 + (1/4)² = 3.125

Exercise 3.

3.a. The fundamental period Tx of x(t) is the smallest number for which x(t + Tx) = x(t). This number is given by Tx = 2π/ωx = 2π/(0.4π) = 5. The period of y(t) is equal to 2π/(1.4π) = 10/7. The signal z(t) = x(t) + y(t) is then periodic if and only if (cfr. pp. 9 of the book) Tx/Ty can be written as a ratio q/r of two integers. Here, z(t) is indeed periodic since

Tx/Ty = 35/10 = 7/2.

Since 7 and 2 are two coprime integers, the fundamental period of z(t) is Tz = 2Tx = 7Ty = 10 s. The result can be verified as follows:

z(t + Tz) = x(t + 2Tx) + y(t + 7Ty) = x(t) + y(t) = z(t)

The above equation shows that z(t) is periodic with period Tz, and Tz is the fundamental period because it is the smallest number for which the above equation holds.

3.b. The signal z(t) is equal to cos(ω1 t − 0.5π) + cos(ω2 t + 0.25π). Consequently, z(t) can be rewritten as:

z(t) = 0.5 (e^{j(ω1 t−0.5π)} + e^{−j(ω1 t−0.5π)} + e^{j(ω2 t+0.25π)} + e^{−j(ω2 t+0.25π)})

which delivers the result.

3.c. The Fourier series of a periodic signal consists of rewriting this signal as:

z(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t}

where ω0 = 2π/T denotes the fundamental frequency of the signal and T its fundamental period. In item [a.], we have shown that z(t) has a fundamental period equal to T = 10. Consequently, ω0 is here equal to 2π/T = 0.2π. Note that ω1 = 2ω0 and ω2 = 7ω0.

Using the expression of z(t) proposed in item [b.] and the relations between ω1, ω2 and ω0, the coefficients ck of the Fourier series of z(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t} can be read off by inspection: c2 = a1, c−2 = a−1, c7 = a2, c−7 = a−2 and ck = 0 for any other values of k.

3.d. Parseval's theorem states that Pz = Σ_{k=−∞}^{∞} |ck|². Consequently, Pz = 1. We obtain the same result via the definition. To show that, first note that

z²(t) = cos²(ω1 t − 0.5π) + cos²(ω2 t + 0.25π) + 2 cos(ω1 t − 0.5π) cos(ω2 t + 0.25π)
     = 0.5 + 0.5 cos(2ω1 t − π) + 0.5 + 0.5 cos(2ω2 t + 0.5π) + cos((ω2 − ω1)t + 0.75π) + cos((ω1 + ω2)t − 0.25π)

Consequently,

(1/T) ∫_{−T/2}^{T/2} z²(t) dt = (1/T)(0.5 + 0.5) T = 1

since 2ω1, 2ω2, ω2 − ω1 and ω1 + ω2 are all harmonics of ω0 = 2π/T and thus integrate to zero over one period.

Exercise 4.

4.a. The signal x(t) is an even signal since the signal is symmetric with respect to the y-axis
i.e. x(t) = x(−t).

4.b. The fundamental frequency ω0 can be derived from the fundamental period T as ω0 = 2π/T = 2π/40 = π/20 rad/s.

4.c. That c0 = 0 is logical since the average over time of the signal is equal to 0. The pe-
riodic signal x(t) indeed oscillates around 0 with the same area above and under the zero-line.

4.d. First we observe that ck = c−k. Thus the complex Fourier series can be rewritten as:

x(t) = c0 + Σ_{k=1}^{+∞} ck (e^{−jkω0 t} + e^{jkω0 t})    (with c0 = 0)
     = Σ_{k=1}^{+∞} ck (2 cos(kω0 t))

Consequently, the coefficients bk = 0 ∀k, the coefficients ak = 2ck for k ≥ 1, and a0 = 0.

4.e. The power is

(1/T) ∫_{−T/2}^{T/2} x²(t) dt = (1/40) ∫_{−20}^{20} 1 dt = 1

since x²(t) = 1 for all t.

Exercise 5.

5.a. In order to determine the Fourier series of x(t), we need to compute its Fourier coeffi-
cients ck . For this purpose, observe first that the fundamental period of x(t) is T and thus
that its fundamental frequency is ω0 = 2π/T .

First, let us compute the Fourier coefficient c0:

c0 = (1/T) ∫_{−T/2}^{T/2} x(t) dt = (1/T) ∫_{−τ/2}^{τ/2} V dt = Vτ/T

Let us now compute the Fourier coefficient ck for an arbitrary k ≠ 0:

ck = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0 t} dt
   = (V/T) ∫_{−τ/2}^{τ/2} e^{−jkω0 t} dt
   = (−V/(jkω0 T)) (e^{−jkω0 τ/2} − e^{jkω0 τ/2})

Using now the fact that ω0 = 2π/T and that, for each φ ∈ R, e^{jφ} − e^{−jφ} = 2j sin(φ) (see item [i.] of the exercise on complex numbers in session 1), we obtain that, for k ≠ 0,

ck = (V/(kπ)) sin(kπτ/T)

The Fourier series expansion of x(t) is then simply:

x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t}

5.b. We have shown that c0 = Vτ/T. This value can also be deduced from the figure representing x(t), since c0 is, by definition, the mean of the periodic signal (over one of its periods). Here, this mean can be determined by dividing the area of the pulse (i.e. Vτ) by the length T of one period.
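The coefficient formula derived above can also be checked numerically. The sketch below (added for illustration; V, T and τ are example values, not taken from the figure) compares the formula ck = (V/(kπ)) sin(kπτ/T) with the integral definition.

```python
# Numerical check of the coefficients of the rectangular pulse train:
# c0 = V*tau/T and ck = (V/(k*pi))*sin(k*pi*tau/T).
import numpy as np

V, T, tau = 1.0, 4.0, 1.0                      # assumed example values
w0 = 2*np.pi/T
t = np.linspace(-T/2, T/2, 40001)
x = np.where(np.abs(t) < tau/2, V, 0.0)

for k in range(0, 4):
    c_num = np.trapz(x*np.exp(-1j*k*w0*t), t)/T
    c_formula = V*tau/T if k == 0 else V/(k*np.pi)*np.sin(k*np.pi*tau/T)
    print(k, c_num.real, c_formula)            # the two columns agree
```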

5.c. Using the definition of the power, we easily obtain P = (V²τ)/T. This power can also be computed using Parseval's theorem:

P = Σ_{k=−∞}^{∞} |ck|² = c0² + Σ_{k=1}^{∞} (ck² + c−k²)
  = V²τ²/T² + 2 Σ_{k=1}^{∞} (V²/(k²π²)) sin²(kπτ/T)
  = V²τ²/T² + (V²τ/(πT)) (π − πτ/T)
  = V²τ/T
5.d. Since x(t) = Σ_{k=−∞}^{∞} ck e^{jkω0 t},

x(t − 0.5 T) = Σ_{k=−∞}^{∞} ck e^{jkω0 (t − 0.5 T)}
            = Σ_{k=−∞}^{∞} c̃k e^{jkω0 t}

with c̃k = ck e^{−jkπ} = ck cos(−kπ) = ck cos(kπ) = (−1)^k ck. Consequently, x(t − 0.5 T) has a Fourier series with coefficients c̃k and the same fundamental pulsation ω0 as x(t).

Exercise 6. Denote by x5(t) the signal considered in Exercise 5 with τ = 0.5 T. Consequently,

x5(t) = Σ_{k=−∞}^{∞} ak e^{jkω0 t}

with ω0 = 2π/T, a0 = V/2 and, for k ≠ 0,

ak = (V/(kπ)) sin(kπ/2)

The signal x(t) considered in this exercise can be written as x(t) = 2 x5(t − T/4) − V. Let us first compute the Fourier series of 2 x5(t − T/4) = Σ_{k=−∞}^{∞} c̃k e^{jkω0 t}, for which the Fourier coefficients are thus denoted c̃k. Using the Fourier series of x5(t) given above, we first obtain:

2 x5(t − T/4) = 2 Σ_{k=−∞}^{∞} ak e^{jkω0 (t − T/4)}

Consequently, we see that:

c̃0 = V

c̃k = 2 ak e^{−jkω0 T/4} = 2 ak e^{−jkπ/2}
    = (2V/(kπ)) sin(kπ/2) e^{−jkπ/2}
    = (2V/(kπ)) (1/(2j)) (e^{jkπ/2} − e^{−jkπ/2}) e^{−jkπ/2}
    = −j (V/(kπ)) (1 − cos(kπ))
    = −j 2V/(kπ) for odd k, and 0 for even k

Now using that x(t) = 2 x5(t − T/4) − V, we obtain the following for the Fourier series of x(t):

x(t) = Σ_{k=−∞}^{∞} c̃k e^{jkω0 t} − V
     = Σ_{k=−∞}^{∞} ck e^{jkω0 t}

The Fourier coefficients ck of x(t) are equal to c̃k for all k ≠ 0, while the coefficient c0 is equal to 0. We see that x(t) has no dc-component, as would be expected by taking a look at the figure representing x(t), which clearly shows that x(t) is a zero-mean signal. Note also that, as a consequence of the fact that x(t) is an odd signal, all Fourier coefficients ck of x(t) are purely imaginary.

Exercise 7. No, since x(t) is not periodic. The Fourier series of a signal only exists if the signal is periodic.

SOLUTIONS: PART II

Exercise 1.

1.a. By choosing τ = 1, we see that h(t) = p1 (t − 0.5).

1.b. By setting τ = 1 in the expression of the Fourier transform of pτ(t), we obtain that the Fourier transform of p1(t) is (2/ω) sin(ω/2). Consequently, using property (3.41) in the book,

H(ω) = (2/ω) sin(ω/2) e^{−jω/2} = (j/ω) (e^{−jω} − 1)

1.c. Left-shifting h(t) by 0.5 delivers p1(t), which has a real Fourier transform.

1.d. The Fourier transform of h(−t) is, by (3.45), equal to H(−ω). Consequently,

G(ω) = H(ω) − H(−ω)
     = (j/ω) (e^{−jω} − 1) − (j/(−ω)) (e^{jω} − 1)
     = (2j/ω) (cos(ω) − 1)

Exercise 2.

2.a. Let us decompose x(t) into two parts: x1(t) = u(t) and x2(t) = e^{−at} u(t) such that x(t) = x1(t) − x2(t). The generalized Fourier transform X1(ω) of x1(t) is 1/(jω) + πδ(ω). The Fourier transform of x2(t) can be deduced as follows:

X2(ω) = ∫₀^{∞} e^{−at} e^{−jωt} dt = ∫₀^{∞} e^{−(a+jω)t} dt = [−e^{−(a+jω)t}/(a + jω)]₀^{∞} = 1/(a + jω)

X(ω) is then equal to X1(ω) − X2(ω).


 
2.b. We observe that x(t) = y(t + φ/ω0) with y(t) = sin(ω0 t). Consequently, using (3.41),

X(ω) = Y(ω) e^{jωφ/ω0}
     = jπ (δ(ω + ω0) − δ(ω − ω0)) e^{jωφ/ω0}
     = jπ e^{−jφ} δ(ω + ω0) − jπ e^{jφ} δ(ω − ω0)

where the last equality follows from the fact that, for a function f(ω), we have that f(ω)δ(ω − ω0) = f(ω0)δ(ω − ω0).

2.c We observe that x(t) = y(t − a) with y(t) = δ(t). Consequently, X(ω) = Y (ω)e−jωa =
e−jωa since Y (ω) = 1.

2.d. Let us denote e^{−at} u(t) by y(t). Then, by (3.51), we obtain:

X(ω) = (j/2) (Y(ω + ω0) − Y(ω − ω0))

In item (a), we have shown that Y(ω) = 1/(a + jω). Thus,

X(ω) = (j/2) (1/(j(ω + ω0) + a) − 1/(j(ω − ω0) + a))
     = ω0 / ((a + jω)² + ω0²)

Exercise 3.

3.a. The Fourier transform of x(−t) is given by:

F(x(−t)) = ∫_{−∞}^{∞} x(−t) e^{−jωt} dt

By substituting λ = −t, we obtain the result

F(x(−t)) = −∫_{∞}^{−∞} x(λ) e^{jωλ} dλ = ∫_{−∞}^{∞} x(λ) e^{jωλ} dλ = ∫_{−∞}^{∞} x(λ) e^{−j(−ω)λ} dλ = X(−ω)

The conjugate X̄(ω) of X(ω) is equal to ∫_{−∞}^{∞} x̄(t) e^{jωt} dt = X(−ω), since x̄(t) = x(t) for real-valued signals.

Consequently, for even signals (i.e. such that x(t) = x(−t)), we have X(ω) = X(−ω) =
X̄(ω). This implies that Im(X(ω)) = 0 ∀ω. For odd signals (i.e. such that x(t) = −x(−t)),
we have X(ω) = −X(−ω) = −X̄(ω). This implies that Re(X(ω)) = 0 ∀ω.

3.b.

F(x(t) e^{jω0 t}) = ∫_{−∞}^{∞} x(t) e^{jω0 t} e^{−jωt} dt = ∫_{−∞}^{∞} x(t) e^{−j(ω−ω0)t} dt = X(ω − ω0)

3.c. First note that x(t) cos(ω0 t) = ½ x(t) (e^{jω0 t} + e^{−jω0 t}). The result then follows from two applications of the property proven in item (b).

Exercise 4.

4.a. Using property (3.74), we see that the Fourier transform X(ω) of the periodic signal x(t) is given by:

X(ω) = 2π Σ_{k=−∞}^{∞} ak δ(ω − kω0)

4.b. The signal g(t) can be rewritten as g(t) = V pτ(t). Using the fact that the Fourier transform P(ω) of pτ(t) is (2/ω) sin(τω/2), the Fourier transform G(ω) of g(t) is:

G(ω) = V P(ω) = (2V/ω) sin(τω/2)

Conversion table.

THIRD edition SECOND edition
p. 2 p. 6
p. 5-6 p. 8-10
p. 101 p. 156
p. 119 p. 173
Table 3.1, p. 141 Table 4.1, p. 189
Table 3.2, p. 144 Table 4.2, p. 192
Section 3.6 Section 4.4
(3.41) (4.46)
(3.45) (4.50)
(3.50) (4.55)
(3.51) (4.56)
(3.52) (4.57)
(3.74) (4.79)

Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 3: FOURIER TRANSFORM AND FILTERING (A)

Most mechanical systems can be accurately modeled by a set of differential equations relating the output y(t) and the input u(t) of the system. In order to simulate the model, the differential equations have to be solved. This is often complicated. The theory of the Fourier transform allows one to gain insight into the behaviour of the modeled system without having to solve the differential equations.

An important tool for this purpose is the frequency response H(ω) of the system. To determine H(ω), we apply property (3.53) of the Fourier transform to the differential equation(s). This delivers an expression of the Fourier transform Y(ω) of the output y(t) as a linear function of the Fourier transform X(ω) of the input x(t). Then, H(ω) is just:

H(ω) = Y(ω) / X(ω)

For example, suppose that a system is described by the differential equation dy(t)/dt + k y(t) = x(t) (k ∈ R). This equation can be rewritten using (3.53) as jωY(ω) + kY(ω) = X(ω). Y(ω) is thus equal to the following function of X(ω): Y(ω) = (1/(jω + k)) X(ω). The frequency response H(ω) of the system is thus 1/(jω + k).
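As a small numerical illustration of this example (a sketch added here; k, A, ω0 and θ are arbitrary example values), the frequency response H(ω) = 1/(jω + k) can be evaluated directly, and its modulus and argument give the steady-state response to a cosine input discussed in the first bullet below.

```python
# Frequency response of dy/dt + k*y = x, i.e. H(w) = 1/(jw + k), and the
# steady-state response to A*cos(w0*t + theta) predicted by |H| and angle(H).
import numpy as np

k = 2.0
def H(w):
    return 1.0/(1j*w + k)

w0, A, theta = 3.0, 1.5, 0.2                    # example sinusoidal input
gain = abs(H(w0))                               # amplitude is multiplied by |H(w0)|
phase = np.angle(H(w0))                         # phase is shifted by angle(H(w0))

t = np.linspace(0, 10, 1001)
y_steady = A*gain*np.cos(w0*t + theta + phase)  # steady-state output, cf. (5.11)
print(gain, phase)
```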

The frequency response H(ω) is a very important quantity when we are interested in the behaviour of a system, for two main reasons.

• The frequency response H(ω) allows one to determine the (steady-state) response y(t) of the system when the input x(t) is given by x(t) = A cos(ω0 t + θ) (−∞ ≤ t ≤ +∞). The response is then

y(t) = A |H(ω0)| cos(ω0 t + θ + ∠H(ω0))    expression (5.11) of the book

The amplitude of x(t) is multiplied by the modulus of H(ω) evaluated at the frequency of x(t), i.e. ω = ω0. The phase of x(t) is shifted by the argument of H(ω) at ω = ω0. See Section 5.1 for more details. The result also holds for sine functions.
• More generally, the relation Y (ω) = H(ω)X(ω) can also be used to compute the
response y(t) for any given x(t). For this purpose, determine the Fourier transform
X(ω) of x(t). With X(ω), determine Y (ω) by multiplying H(ω) and X(ω): Y (ω) =
H(ω)X(ω). The output signal y(t) can then be determined by applying the inverse
Fourier transform on Y (ω) (see (3.38)). This methodology is equivalent to solving
differential equations via the Laplace transform. Indeed, the Laplace variable s is here
just replaced by jω.

Another important quantity is the inverse Fourier transform h(t) of H(ω), i.e. h(t) = F⁻¹(H(ω)). The signal h(t) is called the impulse response of the system. From the impulse response, it can be determined whether the system is stable and/or causal. The system is indeed stable if and only if h(t) tends to 0 when t tends to +∞. The system is causal if and only if h(t) = 0 for all t < 0. These properties come from the fact that the output y(t) of a system to an input x(t) can also be expressed as the convolution of h(t) with x(t):

y(t) = h(t) ∗ x(t) = ∫_{−∞}^{∞} h(λ) x(t − λ) dλ

Remark. (Inverse) Fourier transforms can be computed via their respective definitions (3.30) and (3.38). Alternatively, they can be deduced via standard Fourier transform pairs (see Table 3.2 on page 144) in combination with properties of the Fourier transform (see Table 3.1 on page 141).

The references to the book pertain to the third edition. At the end of the session, there is a conversion table for the second edition.

Figure 1: Bode diagram. Upper plot: |H(ω)| [Magnitude (dB)]. Bottom plot: ∠H(ω) [Phase (deg)]. Frequency axis: 10⁻² to 10¹ rad/sec.

Exercise 1 - Examination 27 January 2006. Consider the mass-spring-damper system given by:

10 d²y(t)/dt² + 0.1 dy(t)/dt + y(t) = x(t)

where y(t) is the position and x(t) the force.

a. Determine the frequency response H(ω) of the mass-spring-damper as well as its mod-
ulus |H(ω)| and argument ∠H(ω)

Figure 2: Upper plot: x(t). Bottom plot: y(t) (both for t ∈ [0, 40] s)

By evaluating |H(ω)| and ∠H(ω) at all frequencies in the interval [0.01 10], we have
obtained the graph (Bode plot) presented in Figure 1. Suppose now that the force x(t) is
periodic and given by:

x(t) = sin(0.3161 t) + 20sin(10 t)

The force x(t) is represented in the upper part of Figure 2 in the interval [0 40]. This system
has been simulated with that periodic signal x(t) and the corresponding output signal y(t)
is represented in the bottom part of Figure 2.

b. Explain the shape of y(t).

Figure 3: Closed-loop system (reference 0, controller C, plant G, disturbance v(t) added at the plant output, controlled output y(t))

Exercise 2. Consider the closed-loop system depicted in Figure 3 where y(t) is the controlled
output, u(t) the command signal (and thus not the unit step function), v(t) the disturbance,
C the controller and G the plant. The frequency responses of the controller and of the plant
are given by:
C(ω) = 1/(jω)    and    G(ω) = 10/(0.1 jω + 1)

a. Determine the frequency response S(ω) such that Y (ω) = S(ω)V (ω) with V (ω), Y (ω)
the Fourier transforms of v(t) and y(t), respectively.

The modulus and argument of the frequency response S(ω) found in item [a.] are repre-
sented in the Bode plot of Figure 4.
Figure 4: Upper plot: |S(ω)| [Magnitude (dB)]. Bottom plot: ∠S(ω) [Phase (deg)], for ω from 10⁻² to 10³ rad/s.

Suppose now that the disturbance v(t) is the periodic signal given below and represented
for t = 0...200 in the upper part of Figure 5.

v(t) = sin(0.05 t) + 0.1 sin(100 t)

The system has been simulated with that periodic v(t) and the corresponding output signal
y(t) is represented in the bottom part of Figure 5.

b. Explain the shape of y(t)?

Exercise 3. Consider the simplified rolling mill depicted in Figure 6. This rolling mill is
made up of two rolls rotating at a speed ω0 of 2 rad/s (i.e. the rotation time is 3.1 s.).
At the output of the mill, the thickness of the steel plate has to be equal to 1 mm. In
order to obtain this constant output thickness, a feedback loop such as in Figure 7 is used.
In this feedback loop, the controlled variable y(t) is the output thickness while the control
variable u(t) is the position of the rolls. The controller C has as objective to keep the output
thickness constant and thus to compensate any disturbance v(t). In a rolling mill, the main
disturbance v(t) consists of the effect of the eccentricity of the rolls on the output thickness.
The eccentricity is a generic term embedding any imperfection of the rolls e.g. the fact that
the rolls are not perfectly round.

Figure 5: Upper plot: v(t). Bottom plot: y(t) (both for t ∈ [0, 200] s)

Figure 6: Rolling mill (two rolls rotating at speed ω0; roll position u(t), output thickness y(t))

Figure 7: Closed loop (reference for y(t) = 1 mm, controller C, plant G, disturbance v(t), output thickness y(t))

A classical model for the eccentricity v(t) is:

v(t) = 0.01 (sin(ω0 t) + 0.8sin(2ω0 t) + 0.1sin(3ω0 t)) [mm]

a. Why is v(t) modeled as a periodic signal with fundamental frequency ω0 (i.e. the roll
speed)

If we denote by G(ω) and C(ω) the frequency responses of the plant and of the controller, the Fourier transform Yv(ω) of the part of the output which is due to the disturbance is given by:

Yv(ω) = (1 + C(ω)G(ω))⁻¹ V(ω) = S(ω) V(ω)

where V(ω) is the Fourier transform of v(t) and S(ω) = (1 + C(ω)G(ω))⁻¹. S(ω) is called the sensitivity function in control theory.

b. In Figure 8, three candidate frequency responses S(ω) are proposed. Which sensitivity
function is the best able to compensate the eccentricity v(t)? Explain why?

Figure 8: Candidates for S(ω): the moduli of S1(ω), S2(ω) and S3(ω) for ω from 10⁻³ to 10² rad/s.

Exercise 4. cfr. book pp. 263, exercise 5.1, sub-questions (a), (b).

Exercise 5 (examination problem August 2006). A radio device receives a continuous-


time signal x(t) given by:

x(t) = x1 (t)cos(2ωB t) + x2 (t)cos(4ωB t)

Figure 9: X1(ω) and X2(ω) (both equal to 1 at ω = 0 and zero outside [−ωB, ωB])

The frequency ωB is given. The signals x1 (t) and x2 (t) are speech signals coming from two
different radio channels. Both signals x1 (t) and x2 (t) are band-limited with a bandwidth
equal to ωB (the same ωB as above!). Thus, the Fourier transforms X1 (ω) and X2 (ω) of
x1 (t) and x2 (t) are such that X1 (ω) = X2 (ω) = 0 for all |ω| > ωB . We will furthermore
suppose that:

• both X1 (ω) and X2 (ω) are entirely real i.e. their imaginary parts are equal to zero for
all ω

• X1 (ω = 0) = X2 (ω = 0) = 1

• X1 (ω) and X2 (ω) have the shapes given in Figure 9.

a. Give an expression of the Fourier transform X(ω) of x(t) as a function of X1 (ω), X2 (ω)
and ωB and represent X(ω) in a graph. For this purpose, use the property of multi-
plication by a cosine of the Fourier Transform in Table 3.1 on page 141 of the book.

We would like to reconstruct the information signal x1 (t) from the received signal x(t) using
a two-step procedure:

Step 1: we generate a signal y(t) by multiplying x(t) by cos(2ωB t):

y(t) = x(t)cos(2ωB t)

Step 2: we generate a signal z(t) by filtering y(t) obtained in Step 1 with a filter whose frequency response H(ω) is given by:

H(ω) = 2 if −ωB < ω < ωB
H(ω) = 0 elsewhere

b. Give an expression of the Fourier transform Y(ω) of y(t) as a function of X1(ω), X2(ω) and ωB and represent Y(ω) in a graph. For this purpose, develop y(t) using the trigonometric formula: cos(A) cos(B) = ½ (cos(A − B) + cos(A + B)).

c. Show that z(t) = x1 (t).
d. Suppose that ωB = 25000rad/s. Could we have used the same two-step procedure to
reconstruct x1 (t) if the signal x(t) was perturbed by the electricity network at 50 Hz,
i.e. if the signal received by the radio was given by:

xbis (t) = x1 (t)cos(2ωB t) + x2 (t)cos(4ωB t) + cos(100πt)

Note that the procedure presented above is similar to the methodology used in your radio
at home when you listen to AM-channels (AM= amplitude modulation). At the end of the
solution of this exercise, more technical explanations are given.

Exercise 6 (examination problem August 2006). Consider the continuous-time Fourier transform

H(ω) = e^{−jω} / (jω + 2)
a. Compute the continuous-time signal h(t) which has H(ω) as Fourier Transform. Use
property (3.41) of the Fourier transform.

Suppose that the signal h(t) found in item [a.] is the impulse response of a filter.

b. Is this filter stable?


c. What is the frequency response of this filter?
d. Given x(t) = δ(t − 2). Compute the signal y(t) which is obtained by filtering the signal
x(t) with this filter.
e. Same question as in item [d.] but now for x(t) = e−t u(t). Use the hint below.
Hint (partial fraction decomposition): We can decompose the frequency function (ajω + b)/((jω + c)(jω + d)) (a, b, c, d ∈ R) as follows:

(ajω + b)/((jω + c)(jω + d)) = α/(jω + c) − β/(jω + d)    with    α = (b − ac)/(d − c)  and  β = (b − ad)/(d − c)

Indeed, the right-hand side can be rewritten as follows:

(α(jω + d) − β(jω + c)) / ((jω + c)(jω + d)) = ((α − β)jω + αd − βc) / ((jω + c)(jω + d))

The latter expression is equal to (ajω + b)/((jω + c)(jω + d)) provided that

ajω + b = (α − β)jω + αd − βc  ⟹  a = α − β and b = αd − βc

Solving this system of two equations for α and β delivers α = (b − ac)/(d − c) and β = (b − ad)/(d − c).
Exercise 7. Consider the following filter H(ω) = (jω)/(b + jω) with b > 0.

a. What is the impulse response h(t) of that filter? Use property (3.53) of the Fourier transform and the fact that du(t)/dt = δ(t), with u(t) the unit step function and δ(t) the unit impulse.

b. What is the signal y(t) that is obtained by filtering the input x(t) = e−at u(t) with
a > 0 by H(ω) ?

Solutions.
Exercise 1.

1.a. We use (3.53) to rewrite the differential equation as follows:

10 (jω)² Y(ω) + 0.1 (jω) Y(ω) + Y(ω) = X(ω)

with X(ω) and Y(ω) the Fourier transforms of x(t) and y(t), respectively. The frequency response H(ω) of the system is thus given by:

H(ω) = Y(ω)/X(ω) = 0.1 / ((jω)² + 0.01 (jω) + 0.1) = 0.1 / ((0.1 − ω²) + 0.01ω j)

The modulus and argument of H(ω) can thus be computed for each ω:

|H(ω)| = 0.1 / √((0.1 − ω²)² + (0.01ω)²)

∠H(ω) = − tan⁻¹(0.01ω / (0.1 − ω²))
1.b. The force x(t) is made up of two frequencies, one of which is the resonance frequency 0.3161 rad/s that can be seen in Figure 1. At this frequency, we read from the Bode plot that H(0.3161) = |H(0.3161)| e^{j∠H(0.3161)} ≈ 31 e^{−jπ/2} (indeed 30 dB = 10^{30/20} ≈ 31). These values can of course also be deduced by filling in ω = 0.3161 in the expressions of the modulus and argument found in item [a.]. At the second frequency ω = 10 within x(t), we have H(ω = 10) ≈ (1/1000) e^{−jπ} (indeed −60 dB = 10^{−60/20} = 0.001). Consequently, y(t) = 31 sin(0.3161 t − π/2) + 0.02 sin(10 t − π). In this expression, the sine at ω = 10 is invisible in the figure of y(t) due to its very small amplitude with respect to the one at ω = 0.3161.

Exercise 2.

2.a. If we denote by V (ω), U(ω) and Y (ω) the Fourier transforms of v(t), u(t) and y(t),
respectively, we see in Figure 3 that Y (ω) = V (ω) + G(ω)U(ω) = V (ω) + G(ω)(−C(ω))Y (ω)
and thus that (1+C(ω)G(ω))Y (ω) = V (ω). Consequently, the ratio between Y (ω) and V (ω)
is given by the following frequency response called the sensitivity function of the closed-loop
system:

S(ω) = 1/(1 + C(ω)G(ω)) = jω (0.1 jω + 1) / (0.1 (jω)² + jω + 10) = jω (0.1 jω + 1) / ((10 − 0.1ω²) + jω)

2.b. From the expression of S(ω), we note that |S(ω = 0)| = 0. We observe the same phenomenon in the Bode plot of S(ω): we indeed see that |S(ω)| goes to 0 with a slope of 20 dB per decade when ω → 0. The modulus |S(ω)| is equal to 0.005 (i.e. −45 dB) at the frequency ω = 0.05. The argument of S(ω) at ω = 0.05 is ≈ π/2. At ω = 100, the modulus of the frequency response S(ω) is ≈ 1 and its argument is ≈ 0. Consequently, y(t) ≈ 0.1 sin(100 t) + 0.005 sin(0.05 t + π/2). The second sine function has an amplitude 20 times smaller than the first one and is thus invisible in the figure representing y(t).

Remark. The frequency response S(ω) is a typical sensitivity function in feedback control
where the goal is to attenuate disturbances having a frequency content smaller than the
chosen bandwidth of the closed-loop system. The bandwidth ωB is here equal to ω = 0.8
and we have observed that the disturbance at frequency 0.05 << ωB is (almost) completely
rejected while the disturbance at ω = 100 >> ωB remains unchanged.

Exercise 3.

3.a. The disturbance due to the imperfections of the rolls is periodic since the same distur-
bance comes back at each rotation of the roll. Consequently, the fundamental frequency of
v(t) should be equal to the roll speed ω0 = 2rad/s.

3.b. The modulus of S1 (ω) is equal to 1 for ω = ω0 , ω = 2ω0 and ω = 3ω0 . Consequently,
the first controller keeps the effect of the eccentricity unchanged. The modulus of S2 (ω) is
very small at ω = ω0 , but is equal to 1 for ω = 2ω0 and ω = 3ω0 . Consequently, the second
controller almost completely removes the first harmonics of v(t), but keeps unchanged the
two other harmonics. The modulus of S3 (ω) is very small at ω = ω0 , ω = 2ω0 and ω = 3ω0 .
Consequently, the third controller almost completely removes all harmonics of v(t) and thus
the whole signal v(t). The third controller is thus the best controller to remove the eccen-
tricity.

Exercise 4. The output y(t) of the filter for sub-question (a) is y(t) = 3cos(3t) − 5sin(6t − 30). The output y(t) corresponding to sub-question (b) is y(t) = Σ_{k=1}^{3} (1/k) cos(2kt).

Exercise 5.

5.a. Using the property (3.52) of the book, we obtain that

X(ω) = ½ (X1(ω − 2ωB) + X1(ω + 2ωB) + X2(ω − 4ωB) + X2(ω + 4ωB))

This delivers the graph of X(ω) given in Figure 10 (the symbol w stands for ω).

5.b. Using the proposed trigonometric formula, we obtain successively:

y(t) = x(t) cos(2ωB t)
     = x1(t) cos²(2ωB t) + x2(t) cos(4ωB t) cos(2ωB t)
     = ½ (x1(t) + x1(t) cos(4ωB t) + x2(t) cos(2ωB t) + x2(t) cos(6ωB t))

Figure 10: X(ω) (components of height 0.5 centred at ±2ωB and ±4ωB)
The Fourier transform Y(ω) is thus given by

Y(ω) = ½ X1(ω) + ¼ (X1(ω − 4ωB) + X1(ω + 4ωB) + X2(ω − 2ωB) + X2(ω + 2ωB) + X2(ω − 6ωB) + X2(ω + 6ωB))

and can thus be represented as in Figure 11.

Figure 11: Y(ω) (a component of height 0.5 around ω = 0 and components of height 0.25 centred at ±2ωB, ±4ωB and ±6ωB)

5.c. Filtering y(t) by the proposed filter, we obtain a signal z(t) whose Fourier Transform
Z(ω) is given by Z(ω) = H(ω)Y (ω). Taking a look at the representation of Y (ω), we see
that Z(ω) is then precisely equal to X1 (ω) and thus z(t) = x1 (t).

5.d. The same procedure can be applied without problem to retrieve x1(t) from xbis(t). Indeed, ybis(t) = y(t) + cos(100πt) cos(2ωB t) = y(t) + ½ (cos((2ωB − 100π)t) + cos((2ωB + 100π)t)). The frequencies 2ωB − 100π and 2ωB + 100π being both larger than ωB, these two cosines will be removed when filtering ybis(t) by H(ω).

Remark: amplitude modulation. This exercise is about the amplitude modulation (AM) technique in radio transmission. Each AM radio channel is characterized by a frequency (the Dutch Radio 1, for example, by 547 kHz). Suppose that a particular radio channel characterized by a frequency ω1 wishes to transmit a speech signal x1(t). Note that a speech signal is band-limited with a bandwidth of roughly ωB = 25000 rad/s. What is important to realize is that the radio channel does not directly transmit x1(t) over the air: it transmits the signal x1(t) cos(ω1 t), where ω1 is the frequency characterizing the radio channel. We see that, in this signal, the cosine at frequency ω1 (i.e. the so-called carrier) has an amplitude which varies in time and which is equal to the speech signal. This explains the term amplitude modulation. Each radio channel does this at its own characteristic frequency. Consequently, our radio device receives a signal which is very similar to the signal x(t) in this exercise, where ω1 = 2ωB and ω2 = 4ωB represent the characteristic frequencies of two different radio channels. When the signal x(t) is received, the procedure presented in the exercise is followed in order to retrieve the speech signal of one of the radio stations. Now, why do we need to multiply the speech signal by a carrier at a characteristic frequency in the first place? In fact, if x1(t) and x2(t) were sent as such over the air, the received signal would be x(t) = x1(t) + x2(t) and it would be impossible to separate those two signals by filtering, since they lie in the same frequency region. The fact that each radio channel modulates its speech signal by a carrier at a different frequency makes it possible for the received signal to contain non-distorted versions of the frequency information of the speech signals of all radio stations. This frequency content is just located in another frequency range. This is evidenced in Figure 10, where we see that the frequency information of the speech signals of the two different radio channels (the triangle and the half circle) is received without any distortion: it is just shifted toward a higher frequency range, i.e. around the characteristic frequency of each radio channel¹. Since the frequency information of both x1(t) and x2(t) is unharmed, it is possible to retrieve one of these speech signals by following the procedure presented in this exercise.
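The two-step recovery procedure of this exercise can be tried out numerically. The sketch below (added for illustration; the tone frequency, sampling rate and filter order are made-up values, and a discrete-time Butterworth low-pass stands in for the ideal filter H(ω)) modulates a toy message, remodulates and low-pass filters it, and recovers the message.

```python
# AM recovery sketch: multiply the received signal by the carrier again,
# then low-pass filter (gain 2 below wB), as in Steps 1 and 2 of the exercise.
import numpy as np
from scipy import signal

fs = 200000.0                                    # sampling frequency [Hz]
t = np.arange(0, 0.02, 1/fs)
fB = 25000/(2*np.pi)                             # bandwidth wB = 25000 rad/s, in Hz

x1 = np.cos(2*np.pi*1000*t)                      # toy band-limited message (1 kHz tone)
carrier = np.cos(2*np.pi*2*fB*t)                 # carrier at 2*wB
received = x1*carrier                            # transmitted/received AM signal

y = received*carrier                             # step 1: multiply by the carrier again
b, a = signal.butter(6, fB, btype='low', fs=fs)  # stand-in for the ideal low-pass
z = 2*signal.filtfilt(b, a, y)                   # step 2: low-pass and apply gain 2

print(np.max(np.abs(z[500:-500] - x1[500:-500])))   # small: z(t) ~ x1(t)
```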

Exercise 6.

6.a. The Fourier Transform H(ω) can be written as H(ω) = Z(ω)e−jω with Z(ω) =
1/(jω + 2). The inverse Fourier Transform of Z(ω) is z(t) = e−2t u(t) with u(t) the unit
step function. Since H(ω) = Z(ω)e−jω , h(t) is then h(t) = z(t − 1) (shift in time property
(3.41)). Consequently, h(t) = e−2(t−1) u(t − 1).

6.b. Since the impulse response h(t) decays to 0 when t → ∞, the filter is stable.

6.c. The frequency response is by definition the Fourier Transform of the impulse response.
The frequency response of the filter is thus H(ω).

6.d. The response of a linear filter to x(t) = δ(t) is by definition the impulse response h(t)
of the filter. Indeed, since F (δ(t)) = 1, the Fourier transform of the response is H(ω) whose
inverse Fourier transform is the impulse response h(t). Consequently, when x(t) = δ(t − 2),
the output is y(t) = h(t − 2) = e^{−2(t−3)} u(t − 3) = e^{−2t+6} u(t − 3).

¹ For this to be possible, it is thus important that the difference between the characteristic frequencies of two channels is larger than 2ωB.

6.e. The Fourier transform Y(ω) of y(t) is given by:

Y(ω) = (e^{−jω}/(jω + 2)) X(ω) = e^{−jω} / ((jω + 2)(jω + 1))

with X(ω) the Fourier transform of x(t). The Fourier transform Y(ω) can now be separated as follows:

Y(ω) = e^{−jω} (1/(jω + 1) − 1/(jω + 2)) = e^{−jω} V(ω)

Using the shift-in-time property of the Fourier transform, we see that y(t) = v(t − 1), where v(t) is the inverse Fourier transform of V(ω). The signal v(t) is here equal to v(t) = (e^{−t} − e^{−2t}) u(t). Thus, y(t) = (e^{−t+1} − e^{−2t+2}) u(t − 1).

Exercise 7.

7.a. The impulse response of a system (or a filter) is the inverse Fourier transform of its frequency response. To compute the inverse Fourier transform of H(ω), we notice that

H(ω) = (jω) Z(ω)    with    Z(ω) = 1/(jω + b)

where Z(ω) is the Fourier transform of z(t) = e^{−bt} u(t). Using now the property (3.53) of the book, we conclude that the impulse response h(t) (i.e. the inverse Fourier transform of H(ω)) is the derivative of z(t) with respect to time:

h(t) = dz(t)/dt = −b e^{−bt} u(t) + e^{−bt} du(t)/dt = −b e^{−bt} u(t) + e^{−bt} δ(t)

7.b. The Fourier transform Y(ω) of y(t) is

Y(ω) = jω / ((b + jω)(a + jω)) = (−a/(b − a)) / (a + jω) + (b/(b − a)) / (b + jω)

⟹  y(t) = (1/(b − a)) (b e^{−bt} − a e^{−at}) u(t)

Conversion table.

THIRD edition SECOND edition
p. 263 p. 238
Table 3.1, p. 141 Table 4.1, p. 189
Table 3.2, p. 144 Table 4.2, p. 192
Section 3.6 Section 4.4
Section 5.1 Section 5.1
(3.30) (4.36)
(3.38) (4.43)
(3.41) (4.46)
(3.52) (4.57)
(3.53) (4.58)
(5.11) (5.15)

Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 3: FOURIER TRANSFORM AND FILTERING (B)

Ideal vs. Non-ideal filters.

Exercise 8. Consider the following signal:

x(t) = xphys(t) + ¼ cos(38t) + ¼ cos(42t)

The signal x(t) is represented in Figure 1. This signal is the measurement of a physical variable xphys(t), which is in this situation taken equal to:

xphys(t) = ½ cos(2t)

As can be seen, the measurement is perturbed by two sinusoids, one at ω = 38 rad/s and one at ω = 42 rad/s. We wish to filter away these two perturbing sinusoids.
Figure 1: Signal x(t) of Exercise 8 (plotted for t ∈ [0, 2] s)

a. Does the following ideal filter achieve this objective?

Hideal(ω) = 1 for −6 < ω < 6
Hideal(ω) = 0 elsewhere

b. Why is this filter generally not implementable in practice?

Since the ideal filter is not implementable in practice, we will consider implementable alternatives for the filter Hideal(ω). A simple alternative is to choose the filter as a Butterworth filter. Such a filter can be generated by the function butter of Matlab. A Butterworth filter is a low-pass filter with a certain cut-off frequency ωcut (in this exercise, as for Hideal(ω), the cut-off frequency will be chosen equal to 6 rad/s). Besides its cut-off frequency, another degree of freedom of a Butterworth filter is its order N (i.e. its complexity). For ωcut = 6, the filters of order N = 1, 2 and 3 are:

Figure 2: Bode plot [Magnitude (dB) and Phase (deg)] of the Butterworth filters of order N = 1, N = 2 and N = 3 (cut-off frequency ωcut = 6 rad/s)

H1(ω) = 1 / (jω/6 + 1)

H2(ω) = 1 / ((1/36)(jω)² + (8.485/36) jω + 1)

H3(ω) = 1 / ((1/216)(jω)³ + (12/216)(jω)² + (72/216) jω + 1)

The frequency responses of these three filters are represented in a Bode plot in Figure 2.

We have filtered x(t) with each of these three filters. Let us denote by yi(t) (i = 1, 2, 3) the output obtained by filtering x(t) by Hi(ω) (i = 1, 2, 3). The three outputs are represented together with the desired output 0.5 cos(2t) in Table 1.
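The filtering operation can be reproduced with scipy (an added sketch, using scipy.signal as a stand-in for Matlab's butter; the time grid is an arbitrary choice).

```python
# Designing the three analog Butterworth filters with cut-off 6 rad/s and
# filtering x(t) with each of them to obtain y1(t), y2(t), y3(t).
import numpy as np
from scipy import signal

t = np.linspace(0, 6, 6001)
x = 0.5*np.cos(2*t) + 0.25*np.cos(38*t) + 0.25*np.cos(42*t)

outputs = {}
for N in (1, 2, 3):
    b, a = signal.butter(N, 6, btype='low', analog=True)   # cut-off 6 rad/s
    _, y, _ = signal.lsim((b, a), U=x, T=t)                 # time-domain filtering
    outputs[N] = y
# outputs[1] still shows the 38/42 rad/s ripple; outputs[3] is close to
# 0.5*cos(2t - 0.7): the perturbations are gone at the price of a phase shift.
```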

c. Based on the expression of H1 (ω), show that the Butterworth filter is indeed imple-
mentable in practice.

d. Explain the shape of the outputs in Table 1. Use for this purpose Figure 2.

Table 1: Top left: desired output 0.5 cos(2t); Top right: y1(t); Bottom left: y2(t); Bottom right: y3(t) (all for t ∈ [0, 6] s)

Exercise 9. Consider the infinite-time signal x(t) depicted in Figure 3 in the interval [−40, 40]. The continuous-time signal x(t) is a periodic signal with a fundamental frequency ω0 = π/20 rad/s. We have shown in Exercise 4 of PART 1 of Session 2 that x(t) can be written as:

x(t) = Σ_{k=1}^{+∞} ak cos(kω0 t)

with

ak = 0 when k = 0
ak = (4/(kπ)) sin(kπ/2) when k ≠ 0

a. Determine a value for A and a value for ωcut in the expression of the ideal filter H(ω):

H(ω) = A if −ωcut ≤ ω ≤ ωcut
H(ω) = 0 elsewhere

Figure 3: signal x(t) of Exercise 9 (plotted for t ∈ [−40, 40] s)

in such a way that, if the signal x(t) is filtered by this filter H(ω), the output signal ydes(t) is equal to ydes(t) = (4/π) cos((π/20) t) − (4/(3π)) cos((3π/20) t). This output signal is represented in the left part of Table 2.

Since the ideal filter designed in item [a.] is not implementable in practice, we decide to perform the filtering operation with a Butterworth filter¹ with a cut-off frequency equal to 4ω0 rad/s. The order of the filter has been chosen as N = 1 and the resulting output is represented as a dashed line in the right part of Table 2.

b. Justify the choice of the cut-off frequency based on what has been found in item [a.]

c. Explain why the obtained output when N = 1 is more similar to the input x(t) than
to the desired output ydes (t).

In order to obtain an output closer to ydes (t), we have increased the order of the But-
terworth filter to N = 10. This filter yields the output represented in solid line in the right
part of Table 2.

d. Explain why the obtained output when N = 10 is now much more similar to the desired
output ydes (t).

1
See Exercise 8 for more details on Butterworth filter.

Table 2: Left: desired output ydes(t); Right: ybutter(t) when N = 1 (black dashed) and when N = 10 (blue solid), both for t ∈ [−40, 40] s

SOLUTIONS
Exercise 8.

8.a. Since the frequencies ω = 38 rad/s and ω = 42 rad/s are larger than 6 rad/s and since the frequency ω = 2 rad/s is smaller than 6 rad/s, the output of the ideal filter will indeed be ½ cos(2t).

8.b. First, note that the filtering operation is always done in the time domain. Now, it is shown in the book that the impulse response h(t) of the ideal filter is nonzero for t < 0 (see page 241). The filter is therefore non-causal; thus, to compute the value of the output at time t = 10 s using e.g. the convolution integral h(t) ∗ x(t), we not only need the values of the signal x(t) for t ≤ 10, but also all the values for t > 10. Since x(t) is the measurement of a physical variable, the values of x(t) at time t > 10 are not available at time t = 10. Consequently, the ideal filter is not implementable in practice.

8.c. The impulse response h1 (t) of the filter H1 is the inverse Fourier transform of H1 (ω).
Using Table 3.1, we see that h1 (t) = 6e−6t u(t) which is equal to 0 for t < 0. The filter is
thus causal and can thus be implemented in practice.

Remark. If, as usual, the measurement x(t) is in the form of a voltage, the filtering operation H1(ω) can easily be done using the analog RC circuit represented in Figure 4, provided that RC = 1/6. Indeed, the frequency response of this circuit is 1/(1 + jRCω).

Figure 4: Analog RC circuit equivalent to H1(ω)

8.d. Recall first that, for i = 1, 2, 3:

yi(t) = ½ |Hi(2)| cos(2t + ∠Hi(2)) + ¼ |Hi(38)| cos(38t + ∠Hi(38)) + ¼ |Hi(42)| cos(42t + ∠Hi(42))

We observe that the output y1(t) of the Butterworth filter with N = 1 still contains a significant contribution at ω = 38 rad/s and at ω = 42 rad/s. Indeed, even though, as required, the modulus of H1(ω = 2) is approximately 1, the moduli of H1(ω) at ω = 38 rad/s and at ω = 42 rad/s are not sufficiently small to make the contribution of these harmonics negligible.

As opposed to this situation, when N is chosen equal to 3, the perturbations at ω = 38


rad/s and at ω = 42 rad/s have disappeared from the output y3 (t). The modulus of H3 (ω)
at ω = 38 rad/s and at ω = 42 rad/s are indeed much smaller than it was the case for H1 (ω).
We have also that |H3 (ω = 2)| ≈ 1, but ∠H3 (2) ≈ −40 deg = −0.7 rad/s. Consequently,
the filtering away of the perturbations has to be paid by a phase shift:

y3 (t) = cos(2t − 0.7)

By inspecting Figure 2, we observe the following phenomena:

• the higher N, the closer the modulus of HN (ω) is to the modulus of Hideal (ω)

• the higher N, the larger the phase shift in the bandwidth of the filter (i.e. for the
frequencies ω such that −ωcut < ω < ωcut )

The case of N = 2 perfectly illustrates this rule: the phase-shift is smaller than for N = 3,
but larger than for N = 1, but the influence of the perturbations is larger than for N = 3,
but smaller than for N = 1.

The choice of N should thus always be subject of a trade-off between phase shifts and
filtering of the perturbations.

Exercise 9.

9.a. Rewriting the first terms of the Fourier series of x(t), we obtain:
4 4 4
x(t) = cos(ω0 t) + 0 − cos(3ω0 t) + 0 + cos(5ω0 t) + ...
π 3π 5π

6
π
with ω0 = 20 rad/s. Consequently, in order to get ydes (t) = π4 cos(ω0 t) − 3π
4
cos(3ω0 t) as the
output signal of the filter H(ω), we have to choose A = 1 and a possible choice for ωcut is

20
rad/s. Since a4 = 0, the value of ωB can in fact be chosen in the following interval:

3π 5π
≤ ωcut <
20 20

9.b. The cut-off frequency of the Butterworth filter can be chosen equal to the cut-off fre-
quency of the ideal filter. Consequently ωcut = 4ω0 = 4π
20
is a reasonable choice.

9.c. The obtained output when N = 1 is more similar to the input x(t) than to the desired
output ydes (t) because the Butterworth filter of order 1 does not allow to reduce the har-
monics above 4ω0 sufficiently to significantly change the shape of x(t) (see Figure 2).

9.d. Unlike when N = 1, the Butterworth filter of order 10 allows to reject almost completely
the harmonics above 4ω0 . Therefore, the obtained output when N = 10 is more similar to
the desired output ydes (t). However, a Butterworth filter of order 10 introduces a (large)
phase shift of the harmonics in the bandwidth i.e. ω0 and 3ω0 and this phase shift is different
for the harmonic in ω0 and for the harmonic in 3ω0 . This explains the difference of shape
between ydes (t) and ybutter (t).

7
Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 3: FOURIER TRANSFORM AND FILTERING (C)

Exercise 10. Tracking with flexible transmission.

x(t) y(t)

Figure 1: Flexible transmission system

Problem description. Consider the flexible transmission system represented in Figure 1 con-
sisting of three horizontal pulleys connected by two elastic belts1 . The first pulley is driven
by a DC motor. The system input x(t) is the angular position of the first pulley and the
output y(t) is the angular position of the third pulley. The modulus |H(ω)| of the frequency
response of this flexible transmission system is represented in Figure 2. In this Figure, we
observe, as expected, two flexible modes.

2
10

1
10
|H(ω)|

0
10

−1
10

−2
10
−2 −1 0 1
10 10 10 10
ω

Figure 2: |H(ω)|
1
Reference: Ion Landau et al. European Journal of Control 1:77-96, 1995.

1
Our objective is to determine the input signal x(t) that has to be applied to the system
to force the output y(t) to follow the periodical pattern that is represented in Figure 3. This
periodical pattern has a fundamental period of 60 s and is very close to a block signal of
amplitude 45 degrees. However, the transition between −45 and 45 is here not instantaneous
as in a block signal but is done in 2.25 s.

The signal yr (t) can be expressed as the following Fourier series expansion (trigonomet-
rical form):

X
yr (t) = bk sin(kω0 t)
k=1
π
with ω0 = 30
rad/s. The Fourier coefficients bk are only nonzero for odd k. The absolute
value of these Fourier coefficients are represented at the top of Table 1. In this table, the
coefficients are represented on the left side from k = 0 till k = 120 while, on the right side,
we zoom on the coefficients from k = 40 till k = 120.
60

40

20
yr(t) [deg]

−20

−40

−60
0 10 20 30 40 50 60 70 80 90 100
time [s]

Figure 3: Desired output yr (t)

Open-loop approach. As a first attempt to force y(t) = yr (t), we decide to apply an input
x(t) equal to yr (t). This could seem logical since, if we turn the first pulley with 45 degrees,
the third pulley will (eventually) also turn with 45 degrees.

We have done the experiment and the achieved output y(t) is represented in Figure 4
where we observe that y(t) has a quite similar shape as yr (t), but we also observe that
undesirable and possibly damaging oscillations occur. In Figure 5, we zoom on one of these
oscillations.
a. Give a rough estimate of the frequency ωoscillation of these oscillations.
b. Does H(ω = ωoscillation ) correspond to something in particular?
c. Explain now why the shape of the output y(t) in Figure 4. For this purpose, use can
be made of the middle of Table 1 where we represent |bk ||H(kω0)|.

2
60

40

20

y(t) [deg]
0

−20

−40

−60
0 10 20 30 40 50 60 70 80 90 100 110
time [s]

Figure 4: Achieved output y(t) when x(t) = yr (t)


60

40

20
y(t) [deg]

−20

−40

−60
2.5 3 3.5 4 4.5 5 5.5 6 6.5 7 7.5
time [s]

Figure 5: Zoom of y(t)

Closed-loop approach. A possible way to remove these oscillations would be to replace the
belts by new belts with more stiffness. Suppose that this is not possible here and that,
instead, we have decided to design the input x(t) using a feedback controller2 C which
will compute x(t) based on the difference yr (t) − y(t) between the desired output yr (t) and
(a measurement of) the actual output y(t). This closed-loop configuration is depicted in
Figure 6. The flexible transmission system in closed loop can be seen as a system with an
input yr (t) and an output y(t). The frequency response of this system is defined as follows:

Y (ω)
T (ω) =
Yr (ω)

with Yr (ω) and Y (ω), the Fourier transforms of yr (t) and y(t), respectively.

2
This controller has been designed using the H∞ framework. Reference: Ferreres-Fromion, Proc. Euro-
pean Control Conference, 1997.

3
x(t) y(t)

-
+
controller y r(t)

Figure 6: Flexible transmission system in closed-loop configuration


1
10

0
10
|T(ω)|

−1
10

−2
10

−3
10
−2 −1 0 1
10 10 10 10
ω

Figure 7: |T (ω)|

d. Determine the expression of T (ω) as a function of H(ω) and C(ω) i.e. the frequency
responses of the flexible transmission system and of the controller, respectively.

The modulus |T (ω)| of T (ω) is represented in Figure 7.

We have run an experiment in the closed-loop configuration and we obtain an output


y(t) as depicted in Figure 8. In Figure 9, we give a detail of y(t) and we compare y(t) with
the desired yr (t).

e. Is the output y(t) obtained in the closed-loop configuration a better image of yr (t)
than it was the case in the open-loop configuration (see Figure 4).

f. Explain the observation made in item [e.]. For this purpose, use can be made of the
bottom of Table 1 where we represent |bk ||T (kω0 )|.

4
60

40

20

y(t) [deg]
0

−20

−40

−60
0 10 20 30 40 50 60 70 80 90 100 110
time [s]

Figure 8: Output y(t) obtained in the closed-loop configuration


60

40

20
y(t) [deg]

−20

−40

−60
28 29 30 31 32 33 34 35
time [s]

Figure 9: Detail of the output y(t) obtained in the closed-loop configuration (red solid)
compared to the same detail of yr (t) (blue dotted)

Suppose now that the measurement of the output y(t) is achieved using an electrical sensor
and that the electrical network induces a measurement error v(t) = sin(100πt):

ymeasured (t) 6= y(t)


= y(t) + sin(100πt)

This means that the signal entering into the controller is not yr (t)−y(t), but yr (t)−y(t)−v(t).

h. Knowing that T (ω = 100π) ≈ 0.0001, can you deduce whether this measurement error
will induce a (significant) change in the shape of the actual output y(t) of the system?

5
SOLUTIONS
Exercise 10.

10.a. In Figure 4, we see that the oscillation is a damped sinusoid (a sinusoid multiplied
by e−αt for some α > 0). Now, by looking at Figure 5, we observe that this sinusoid has a
period of approximatively 0.8 s. This leads to a frequency of:
2π 5
ωoscillation ≈ = π ≈ 8 rad/s
0.8 2

10.b. By inspection of Figure 2, we see that this frequency ωoscillation corresponds to the
frequency of the first resonance peak of H(ω).

10.c. Let us first determine the output y(t). For this purpose, let us note that, since
x(t) = yr (t), we have that

X
x(t) = bk sin(k ω0 t)
k=1

Consequently, using formula (5.11), we obtain that:



X
y(t) = |H(kω0 )| bk sin(k ω0 t + ∠H(kω0 ))
k=1

Consequently, we see that bk |H(kω0)| represents the amplitude of the harmonic at kω0 in
the output. We can therefore explain the shape of y(t) as follows:

• The occurrence of oscillations at ω = 8 rad/s in y(t) can be explained as follows. First


note that the harmonics of x(t) at frequencies around ω = 8 (i.e. around k = 75)
have negligeable amplitudes (see the top of Table 1). However, due to the fact that
H(ω) >> 1 at those frequencies (|H(ω = 8)| ≈ 15 (see Figure 2)), the harmonics of
y(t) at those frequencies are no longer negligeable and induces the damped sinusoidal
oscillation at a frequency ω = 83 . This is confirmed by comparing the top and the
middle part of Table 1 where we see that |bk ||H(kω0)| >> |bk | around k = 75. Note
also that the second resonance peak is too small to induce any significant oscillation.

• That y(t) and x(t) have a quite similar shape can also be explained. Note for this
purpose that bk is significant up to k = 19 i.e that the main components of x(t) are
sinusoids at frequencies smaller than 19ω0 ≈ 2 rad/s (see the top of Table 1). Note also
that |H(ω)| ≈ 1 for ω < 2 and that consequently, bk |H(kω0)| ≈ bk for k ≤ 19. This
is confirmed by comparing the top and the middle part of Table 1 which are similar
up to k = 19. Since the harmonics up to k = 19 are the most significant harmonics of
3
In fact, the damping is due to the summation of different harmonics with different phases and with
frequencies around ω = 8.

6
x(t) and that the extra harmonics of y(t) around ω = 8 are limited in amplitude since
the resonance peak is limited (the stiffness of the elastic belts is large), the shapes of
y(t) and x(t) = yr (t) are quite similar.

10.d. In Figure 6, we see that:

X(ω) = C(ω) (Yr (ω) − Y (ω))


with X(ω), Yr (ω) and Y (ω), the Fourier transforms of x(t), yr (t) and y(t), respectively. Now,
we have also that Y (ω) = H(ω)X(ω). This yields:

Y (ω) = H(ω)X(ω) = H(ω)C(ω) (Yr (ω) − Y (ω))

=⇒ (1 + H(ω)C(ω)) Y (ω) = H(ω)C(ω)Yr (ω)

Y (ω) H(ω)C(ω)
=⇒ T (ω) = =
Yr (ω) 1 + H(ω)C(ω)
T is the so-called complementary sensitivity function.

10.e. Yes, the output y(t) obtained in closed loop is a better image of yr (t) than the output
obtained in open loop. The undesirable oscillations have disappeared. In fact, as can be
seen in Figure 9, the only remaining discrepancy is a very short delay of approximately one
second and smoothed edges when the signal reaches ±45degrees.

10.f. Following a similar reasoning as in item [c.], we first deduce an expression for the
output y(t) in closed loop:

X
y(t) = |T (kω0 )| bk sin(k ω0 t + ∠T (kω0 ))
k=1

Consequently, we see that bk |T (kω0)| represents the amplitude of the harmonic at kω0 in the
output. We can therefore explain the shape of y(t) as follows:
• The fact that the oscillations have disappeared can be explained by the fact that,
unlike H(ω), T (ω) does not present any resonance peak with important amplitude.
• Similarly as in the open-loop case, we have also to note for this purpose that bk is
significant up to k = 19 i.e that the main components of yr (t) are sinusoids at frequen-
cies smaller than 19ω0 ≈ 2 rad/s (see the top of Table 1). Note also that |T (ω)| ≈ 1
for ω < 2 and that consequently, bk |T (kω0)| ≈ bk for k ≤ 19. This is confirmed the
top and the bottom part of Table 1 which are similar up to k = 19. Consequently,
the significant harmonics of y(t) and yr (t) are approximately equal which explains the
almost perfect match between y(t) and yr (t). In fact, since |T (ω)| < 1 for all ω > 2
and since |T (ω)| → 0 when ω → ∞, y(t) only misses the harmonics of yr (t) at the
highest frequencies. This yields the smoothed edges and the small delay4 .

4
Note that this is unavoidable since a bounded control action can never achieve T (ω) = 1 ∀ω.

7
10.h. The measurement error can be seen as a second input to the system. Consequently,

Y (ω) = T1 (ω)Yr (ω) + T2 (ω)V (ω)

where V (ω) is the Fourier transform of v(t) and T1 (ω), T2 (ω) are two frequency responses
that have to be determined. Following a similar reasoning as in item [d.], we can deduce
T1 (ω) and T2 (ω) as follows. First note that:

X(ω) = C(ω) (Yr (ω) − Y (ω) − V (ω))

The last equation combined with Y (ω) = H(ω)X(ω) yields:

Y (ω) = H(ω)X(ω) = H(ω)C(ω) (Yr (ω) − Y (ω) − V (ω))

=⇒ (1 + H(ω)C(ω)) Y (ω) = H(ω)C(ω) (Yr (ω) − V (ω))

H(ω)C(ω) H(ω)C(ω)
=⇒ Y (ω) = Yr (ω) − V (ω)
1 + H(ω)C(ω) 1 + H(ω)C(ω)

Consequently, T1 (ω) = −T2 (ω) = T (ω). If we denote by y1 (t) the output obtained when
there is no measurement error i.e. the output depicted in Figure 8, the actual output y(t)
of the system5 when there is a measurement error is given by:

y(t) = y1 (t) + |T (100π)| sin(100πt + ∠T (100π))


| {z }
=0.0001

Since the perturbation induced by v(t) on the actual output is a sinusoid of amplitude 0.0001
degree and is therefore negligeable, we have that y(t) ≈ y1 (t).

5
NOT the measured output ymeasured (t)

8
60 0.9

0.8
50

0.7

40 0.6

0.5
|bk|

|bk|
30

0.4

20 0.3

0.2
10
0.1

0 0
0 20 40 60 80 100 120 40 50 60 70 80 90 100 110 120
k k

60 0.9

0.8
50

0.7

40 0.6
|b | |H(k ω )|

|b | |H(k ω )|
0

0.5
0

30
k

0.4

20 0.3

0.2
10
0.1

0 0
0 20 40 60 80 100 120 40 50 60 70 80 90 100 110 120
k k

60 0.9

0.8
50

0.7

40 0.6
|b | |T(k ω )|

|b | |T(k ω )|

0.5
0

30
k

0.4

20 0.3

0.2
10
0.1

0 0
0 20 40 60 80 100 120 40 50 60 70 80 90 100 110 120
k k

Table 1: Top Left: |bk | in the interval [0 120]; Top Right: |bk | in the interval [40 120];
Middle Left: |bk ||H(kω0)| in the interval [0 120]; Middle Right: |bk ||H(kω0)| in the interval
[40 120]; Bottom Left: |bk ||T (kω0 )| in the interval [0 120]; Bottom Right: |bk ||T (kω0 )| in
the interval [40 120];

9
Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 4:
DISCRETE-TIME SIGNALS AND FOURIER TRANSFORM - SAMPLING

PART 1: DISCRETE-TIME SIGNALS AND FOURIER TRANSFORM

A discrete-time signal x[n] is defined only for integer values of n. A discrete-time can
e.g. be generated by sampling a continuous-time signal x(t) with a sampling period Ts i.e.
x[n] = x(t = nTs ) (see Part 2).

A discrete-time signal x[n] is periodic if and only if we can find an integer r such that
x[n + r] = x[n] for all integers n. The smallest integer r having this propriety is called the
fundamental period of x[n]. See p. 16 of the book for more details.

The discrete-time Fourier transform DTFT F (x[n]) = X(Ω) of a discrete-time signal x[n]
describes the frequency content of the signal x[n]. X(ω) is a complex function of the
frequency Ω. It is defined as (see expression (4.2) in the book):


X
X(Ω) = x[n] e−jΩn
n=−∞

The DTFT is always periodic with period 2π i.e. X(Ω + 2π) = X(Ω). The DTFT of a given
signal x[n] can be computed

• either by applying the definition (thus by evaluating the summation above). In some
cases, an important formula for doing this is the following one:

q2
X r q1 − r q2+1
rk = expression (4.5) in the book
k=q1
1−r

which can be applied when r is a real or complex number1 and when q1 < q2

• or by using standard DTFT pairs in combination with properties of the DTFT. Ta-
ble 4.1 on page 177 of the book summarizes the DTFTs of some common signals. The
properties of the DTFT are summarized in Table 4.2 on page 178.

The references to the book pertains to the third edition. At the end of the session, there is a
conversion table for the second edition.

Exercise 1. Determine the value of the discrete-time signal x[n] = u[n]−2u[n−1]+ u[n−4]
for each integer n (u[n] is the discrete-time unit-step function (see p. 14 of the book)).

1
When q2 = +∞, the formula only holds when |r| < 1

1
Exercise 2. Let us consider the continuous-time signal x(t) = cos(t). This signal is sampled
with a sampling period Ts = 0.1. Is the obtained discrete-time signal x[n] periodic ? What
happens if we change the sampling period to Ts = 0.1 π3 = 0.10472...?

Exercise 3. Consider the following discrete-time signal x[n]:



 1 for n = 0 and n = 2
x[n] = −1 for n = 1
0 elsewhere

a. Compute the DTFT X(Ω) of the finite-time signal x[n].

b. Show that X(Ω) is periodic with period 2π.

Exercise 4. Let a, b, Ω0 be given constants (|a| < 1 and b is an integer). Compute the
(generalized if necessary) DTFT X(Ω) of the following discrete-time signals:

a. x[n] = an u[n] with u[n] the discrete-time unit step function (see page 14 of the book)

b. x[n] = a|n| with |.| the absolute value operator. Use the fact that x[n] = a−n when
n < 0.

c. x[n] = δ[n − b] with δ[n] the unit-pulse function (see page 15). Use a property of the
DTFT.

d. x[n] = an sin(Ω0 n) u[n]. Use a property of the DTFT

PART 2: SAMPLING

Computers cannot deal with continuous-time signals. To analyze the property of a signal or
for other use involving computers, signals have to be sampled. Consequently, a discrete-time
signal x[n] can be the sampled version of a continuous-time signal x(t):

x[n] = x(t = nTs )

where Ts is the sampling period. The sampling frequency ωs is then defined as ωs = Ts
.

When x[n] = x(t = nTs ), there exists a simple relation between the continuous-time Fourier
transform X(ω) of the original continuous-time signal x(t) and the DTFT X(Ω) of the
sampled signal x[n]. This relation is illustrated in the figure below:

2
X(ω)

T s X s(ω)

-2 ω s -ω s − ω s/2 ω s/2 ωs 2ω s ω

Ts X(Ω)

-4π -2 π −π π 2π 4π Ω

In this figure, we see that, starting from the continuous-time X(ω), we construct Xs (ω) by
summing up shifted versions of X(ω):


1 X
Xs (ω) = X(ω − kωs ) (1)
Ts k=−∞
1
= (... + X(ω − 2ωs ) + X(ω − ωs ) + X(ω) + X(ω + ωs ) + X(ω + 2ωs ) + ...)(2)
Ts

Note that the factor T1s in the equations above is just a scaling factor and is therefore not
really important. The DTFT X(Ω) is then directly obtained from Xs (ω) by replacing the
frequency ω by the normalized frequency Ω = ωTs :

X(Ω) = Xs (ω = ) (3)
Ts
or equivalently:

Xs (ω) = X(Ω = ωTs ) (4)


π ωs
The frequency normalization is such that Ω = π corresponds to ω = Ts
= 2
and such that
Ω = 2π corresponds to ω = ωs .

Observations. The important thing to note here is that Xs (ω) and X(Ω) contain the same
information: we indeed see in (3) that X(Ω) is just Xs (ω) expressed with the normalized
frequency Ω. 2 The period of X(Ω) being 2π, Xs (ω) is therefore a periodic function with
period ωs .

The important question now is: have we lost information by sampling? ANSWER: no in-
formation is lost if and only if Ts Xs (ω) = X(ω) for all ω ∈ [− ω2s ω2s ]. In other words, no
information is lost if the main interval [− ω2s ω2s ] of Xs (ω) (or the main interval [−π π] of
X(Ω)) presents a non-distorted image of X(ω).

2
If Ts = 1, we even have that ω = Ω and that Xs (ω) is equivalent to X(Ω).

3
A necessary and sufficient condition for this is that the sampling frequency ωs is chosen
larger than twice the highest frequency present in the Fourier transform X(ω) of the original
continuous-time signal x(t) (Shannon’s theorem). In other words, if X(ω) = 0 for |ω| > B
(B given), then ωs should be chosen such that ωs > 2B. In the figure on page 3, we see that
the condition of Shannon’s theorem is respected (X(ω) = 0 for all |ω| > ω2s ). This in turn
explains that X(Ω) is a non-distorted version of X(ω) in its main interval [−π π].

When ωs has been chosen according to Shannon’s theorem, no information is lost through
sampling and, if we would like so, the continuous-time signal x(t) can be reconstructed from
its sampled version x[n] using the following relation:
+∞
sin ω2s (t − nTs )

1 X
x(t) = x[n] (5)
π n=−∞ t − nTs
Note that this relation is non causal. Indeed, to compute the continuous-time signal at time
t, you need x[n] from n = −∞ to n = ∞. This is in fact logical since the reconstruction can
be interpreted as filtering the “fictive” (continuous-time) signal corresponding to Xs (ω) with
an ideal low-pass filter with cut-off frequency3 ωcut = ω2s and we have seen in exercise 8 of
session 3 that ideal filtering is an non-causal operation. Moreover, it is also to be noted that
this non-causal operation has to be performed for all t ∈ R to reconstruct the original signal.
Consequently, even though we will consider this relation for this exercise session, this rela-
tion is an ideal reconstruction mechanism which is rarely used in practice. Instead, simpler
methods such as the Zero Order Hold (ZOH) mechanism is used to generate continuous-time
signals from discrete-time signals. The ZOH mechanism will be discussed in Session 5.

Let us now see what happens when Shannon’s theorem is not respected. Shannon’s theorem
is not respected if a signal x(t) is sampled with a sampling frequency ωs while its Fourier
transform X(ω) is nonzero for |ω| > ω2s . In this case, there is overlapping between the
different terms in the summation on the right-hand side of (2). Consequently, X(Ω) is no
longer a perfect image of X(ω) in its main interval [−π π]. This phenomenon is called
aliasing. When aliasing occurs, it is completely impossible to reconstruct x(t) from the
sampled signal even if we use (5). In other words, if Shannon’s theorem is not satisfied,
+∞ ωs

1 X sin 2
(t
− nTs )
x(t) 6= x[n]
π n=−∞
t − nTs

In exercises 5 and 6, we will study the theory above in detail to gain understanding.

Exercise 5. Consider the continuous-time signal x(t) = cos10t. This signal is sampled with
π
a sampling frequency ωs = 30 rad/s (i.e. Ts = 15 s). This delivers the discrete-time signal
x[n].

a. Determine the continuous-time Fourier transform of x(t).


3
In fact, ωcut can be chosen between B and ωs /2.

4
1

0.8

0.6

0.4

0.2

−0.2

−0.4

−0.6

−0.8

−1

0 1 2 3 4 5
time [s]

πn
Figure 1: Exercise 5: x(t) = cos(10t) (red dashed) and x[n] = x(t = 15
) for n = 0...29 (blue
circle)

0.8

0.6

0.4

0.2

−0.2

−0.4

−0.6

−0.8

−1

0 1 2 3 4 5
time [s]

4πn
Figure 2: Exercise 6: x(t) = cos(10t) (red dashed) and x[n] = x(t = 15
) for n = 0...8 (blue
circle)

5
b. Give an expression for x[n].

c. Is Shannon’s theorem respected?

d. Give an expression for the DTFT X(Ω) of x[n] using the properties of the DTFT (see
Table 4.1). At which normalized frequencies Ω has X(Ω) delta-impulses in its main
interval [−π π]? To which actual frequencies in rad/s do these normalized frequencies
correspond?

e. Is X(Ω) in its main interval a non-distorted image of X(ω)?

f. Determine the DTFT X(Ω) of x[n] starting from the Fourier transform X(ω) of x(t)
i.e. by using the relations (1)-(2) and (3). Verify that we obtain the same result as in
item [e.]

g. Give an expression for the continuous-time signal xrec (t) that is obtained by applying
the ideal reconstruction mechanism (presented in the summary of the theory) on the
discrete-time signal x[n].
15
Exercise 6. Answer the same questions as in Exercise 5 when ωs is chosen equal to ωs = 2
rad/s (i.e. Ts = 4π
15
s).

Moreover, we add this extra question:

h. The signals x[n] of Exercises 5 and 6 are compared with x(t) in Figures 1 and 2,
respectively. Do these figures confirm what has been deduced in both exercises, in
particular with respect to Shannon’s theorem?

Exercise 7. A continuous-time signal x(t) is made-up of two parts x1 (t) and x2 (t):
=x2 (t)
z }| {
x(t) = x1 (t) + Acos(100πt)

The signal x1 (t) is the signal of interest and is band-limited with a bandwidth ωB = 30π rad/s
i.e. X1 (ω) = 0 for |ω| > ωB . The signal x2 (t) is an undesirable disturbance at 50 Hz which
is due to the electricity network and which has to be filtered away.

For further use in a computer, the signal has to be sampled at a sampling frequency
1
ωs = 80π rad/s (i.e. Ts = 40 s). In this exercise, we will analyze what is this best method
to filter away x2 (t) and to obtain a sampled signal which is a faithful image of x1 (t). For
this purpose, we will need an anti-aliasing filter (see later).

a. Is the sampling frequency ωs sufficiently high to ensure that the DTFT X1 (Ω) of the
sampled signal x1 [n] = x1 (nTs ) is, in its main interval [−π π], a non-distorted image
of the continuous-time Fourier transform X1 (ω) of x1 (t)

6
b. Suppose that, before the sampling device, the signal x(t) is filtered by a filter whose
frequency response F (ω) is given by:

1 for −40π ≤ ω ≤ 40π
F (ω) =
0 elsewhere

Such a filter is generally called an anti-aliasing filter. Denote by z(t) the signal obtained
by filtering x(t) by this filter. This signal z(t) is the signal entering the sampling device
which delivers z[n]. Show that the DTFT Z(Ω) of z[n] is equal to X1 (Ω), the DTFT
of x1 [n] found in item [a.].

c. Why is it important to place the filter F (ω) before the sampling device and not after?

Exercise 8 (Examination November 2006). Some periodic vibrations due to air tur-
bulence are very hazardous for the wings of an aircraft. It is therefore important to detect
them when they are occurring in order to compensate them. Suppose these hazardous vi-
brations are vibrations at ω0 and 2ω0 for ω0 = 20π rad/s. An engineer at AIRBUS has
developed a digital device to detect those vibrations by spectral analysis. To explain how
this device works, we first denote the continuous-time vibration (displacement) of the wing
at a particular spot by x(t). The detection device works in three steps:

• Step 1: The vibration x(t) is measured as illustrated in the figure below:

x(t) z(t) z[n]


Filter F(ω) Sampling

In this picture, the filter F (ω) is a so-called anti-aliasing filter and is given by:

1 for −50π ≤ ω ≤ 50π
F (ω) =
0 elsewhere

and the sampling occurs with a sampling frequency equal to ωs = 100π rad/s.

• Step 2: the DTFT Z(Ω) of z[n] is computed and represented in the main interval
[−π , π].

• Step 3: It is then verified whether the main interval of Z(Ω) exhibits delta-impulses
at the normalized frequencies Ω = ±0.4π and/or at Ω = ±0.8π. If such delta-impulses
are detected, an alarm goes off.

a. Show that the proposed device is able to detect hazardous vibrations x(t) = cos(ω0t)
and x(t) = cos(2ω0 t).

b. How does the detection device respond to a vibration x(t) = cos(120πt) (non-hazardous
vibration)? Explain your answer.

7
c. How would the detection device respond to a vibration x(t) = cos(120πt) (non-
hazardous vibration) if the filter F (ω) would be absent? Explain your answer.

1.5

0.5

0
−30 −20 −10 0 10 20 30
ω

Figure 3: |X(ω)| (solid) and |Ts Xs (ω)| when Ts = 0.2π (dashed) and when Ts = 0.1π
(dotted)

Exercise 9. Consider the continuous-time signal x(t) = e−t u(t). This signal is sampled
with a period Ts . This delivers the signal x[n] = x(t = nTs ).

a. Determine the continuous-time Fourier transform of x(t).


b. Determine the DTFT X(Ω) of x[n]. Deduce also Xs (ω) from X(Ω) based on formula (4)
in the theory summary.
c. The modulus of X(ω) and of Ts Xs (ω) (for Ts = 0.2π and for Ts = 0.1π) are represented
in Figure 3. Explain why, in both cases, Xs (ω) is a distorted image of X(ω).
d. Suppose now that, before the sampling of x(t), x(t) is filtered by an (anti-aliasing)
filter:

1 for −10 ≤ ω ≤ 10
F (ω) =
0 elsewhere

This filtering operation delivers the signal y(t) whose Fourier transform is denoted
Y (ω). Draw the modulus of Y (ω) using the information in Figure 3.
e. The signal y(t) is subsequently sampled with sampling period Ts = 0.1π s. Draw the
modulus of Ys (ω) = Y (Ω = ωTs ) using what has been found in item [d.].
f. Let us define:

+∞ ωs

1 X sin 2
(t
− nTs )
yrec (t) = y[n] (6)
π n=−∞
t − nTs
+∞ ωs

1 X sin 2
(t
− nTs )
xrec (t) = x[n] (7)
π n=−∞
t − nTs

8
where y[n] and x[n] have been defined above and Ts = 0.1π s, ωs = 20 rad/s. Show,
via a Fourier transform analysis, that the signal yrec (t) reconstructed with y[n] is a
better image of x(t) than the signal xrec (t) reconstructed with x[n]. In other words,
show that the presence of the anti-aliasing filter is beneficial.

Exercise 10. CD recording. Consider the continuous-time signal x(t) corresponding to


a piece of music lasting 600 s. The signal x(t) is not band-limited. However, it is important
to note that all frequencies above 120000 rad/s can not heard by a human being. We would
like to record this piece of music onto a CD. For this purpose, the signal has to sampled and
each element x[n] of the sampled signal will then be engraved onto the CD.

a. Show that the number of samples that has to be engraved increases with increasing
values of the sampling frequency ωs

We would like to keep the number of samples that has to be engraved as small as possi-
ble while guaranteeing that the engraved information contains all the hearable frequency
information of x(t).

b. Determine a device made up of a sampling block and an anti-aliasing filter which


generates a sampled signal having the property described above. Exercise 9 is a good
source of inspiration for this.

9
Solutions

Exercise 1. The signal x[n] is equal to u[n] + y[n] + z[n] where y[n] = −2u[n − 1] and
z[n] = u[n − 4]. The discrete-time unit-step function u[n] is given by:

0 f or n < 0
u[n] =
1 f or n ≥ 0

The signal y[n] is made up with u[n − 1] which is u[n] right-shifted of one sample. Conse-
quently, y[n] = −2u[n − 1] is given by:

0 f or n < 1
y[n] =
−2 f or n ≥ 1

Using a similar reasoning, we find that:



0 f or n < 4
z[n] =
1 f or n ≥ 4

Consequently, x[n] = u[n] + y[n] + z[n] is given by:



 x[n] = 1 f or n = 0
x[n] = −1 f or n = 1, 2 and 3
x[n] = 0 f or any other n

Exercise 2. The signal x[n] = x(0.1 n) = cos(0.1n) is not periodic since we cannot find any
integer rx such that 0.1rx = 2πq for some integer q (see page 16). For the second value of
Ts , we can prove using the same procedure that the discrete-time signal x[n] = cos(0.1 π3 n)
is periodic of (fundamental) period rx = 60. This result can be checked as follows:
π
x[n + 60] = cos(0.1 n + 2π) = x[n]
3
Exercise 3.
P∞
3.a. The DTFT X(Ω) = n=−∞ x[n]e−jΩn of x[n] is here:

X(Ω) = 1 − e−jΩ + e−2jΩ


= e−jΩ ejΩ − 1 + e−jΩ


= e−jΩ (2cos(Ω) − 1)

3.b. The DTFT is periodic of period 2π. Indeed,



X(Ω + 2π) = e−j(Ω+2π) (2cos(Ω + 2π) − 1) = e−jΩ (2cos(Ω) − 1) = X(Ω)

10
Exercise 4.

4.a. Based on the definition of the DTFT, we obtain successively:


X ∞
X
−jΩn
X(Ω) = x[n]e = an e−jΩn
n=−∞ n=0

X
= (ae−jΩ )n
n=0
1
=
1 − ae−jΩ
where, for the last equivalence, we have used formula (4.5).

4.b. Using the definition, we obtain:



X −1
X
X(Ω) = an e−jΩn + a−n e−jΩn
n=0 n=−∞

Posing k = −n in the second summation, the expression can be rewritten as follows:


X ∞
X
X(Ω) = (ae−jΩ )n + (aejΩ )k
n=0 k=1

1 aejΩ
= +
1 − ae−jΩ 1 − aejΩ
1 − a2
=
1 + a2 − 2acos(Ω)

4.c. x[n] = z[n − b] with z[n] = δ[n]. Consequently, X(Ω) = e−jΩbZ(Ω) (shift in time
property). Since Z(Ω) = 1, we obtain X(Ω) = e−jΩb .

4.d. First, observe that x[n] = z[n]sin(Ω0 n) with z[n] = an u[n]. In item [a.], we have proven
that Z(Ω) = 1/(1 − ae−jΩ ) and, using the property of multiplication by a sine, we find that
the DTFT of z[n]sin(Ω0 n) is 0.5 j (Z(Ω + Ω0 ) − Z(Ω − Ω0 )). Consequently,
 
j 1 1
X(Ω) = −
2 1 − ae−j(Ω+Ω0 ) 1 − ae−j(Ω−Ω0 )

Exercise 5.

11
5.a. Using Table 3.2 of the book, we see that the continuous-time Fourier transform of
x(t) is given by X(ω) = π (δ(ω − 10) + δ(ω + 10)). X(ω) is represented in the above plot of
Figure 4.

π
5.b. Since Ts = 15
, the sampled signal x[n] is given by

π 2π
x[n] = x(t = nTs ) = cos(10n ) = cos( n)
15 3
5.c. Shannon’s theorem is here respected since the highest non-zero frequency in X(ω) is
10 rad/s and we have chosen ωs > 20 rad/s.

5.d. The DTFT of x[n] is given by (see Table 4.1)



X 2π 2π
X(Ω) = π δ(Ω + − 2πk) + δ(Ω − − 2πk)
k=−∞
3 3

The DTFT X(Ω) is made up of summation of delta-impulses and is periodic of period 2π


such as every DTFT. In the main interval [−π π] for Ω, there are only two delta-impulses
(when k = 0) and they are located at Ω = ± 2π
3
. Using the relation ω = TΩs , we obtain that
± 2π
these two frequencies correspond to ω = π
3
= ±10 rad/s.
15

5.e. The DTFT X(Ω) in its main interval presents two delta-impulses at frequencies corre-
sponding to ±10 rad/s. The continuous-time Fourier transform of x(t) presents two delta-
impulses at ±10 rad/s. We see thus that X(Ω) is a non-distorted image of X(ω).

5.f. We will first answer item [f.] in a graphical way. For this purpose, Xs (ω) can be deduced
as shown in Figure 4. We will here just deduce the main interval [−15 15] of Xs (ω) since
we know that Xs (ω) is periodic. We start with X(ω) (above plot). Then, to X(ω), we add
X(ω − ωs ) (ωs = 30). This delivers the middle plot. We continue by adding X(ω + ωs ). This
delivers the bottom plot. Wee see that the addition of both X(ω − ωs ) and X(ω + ωs ) do not
change anything in the main interval. This will be also the case for the terms X(ω − kωs ) for
|k| > 1 since X(ω − kωs ) presents delta-impulses at the frequencies ω = kωs ± 10 = 30k ± 10.
These frequencies are not located in the interval [−15 15] for |k| > 0. Consequently, we
conclude that the main interval [−15 15] of Xs (ω) presents two delta-impulses at ω = ±10.
Now that we have determined Xs (ω) in its main interval, it is straightforward to determine
X(Ω) in its main interval by recalling that Ω = ωTs : the main interval of X(Ω) presents
delta-impulses at Ω = ±10Ts = ± 10π 15
= ± 2π
3
which is what we had found in item [d.]. Note
that what is represented in the interval [−15 15] in Figure 4 is in fact Ts Xs (ω) (see (1)).

We can also answer item [f.] in a mathematical way. Starting from


X(ω) = π (δ(ω − 10) + δ(ω + 10)), we use (1) to determine Xs (ω):

12
X(ω)

(π) (π)

ω
−10 0 10

X(ω-ω s)+X(ω)

(π) (π) (π) (π)


ω
−10 10 20 ωs 40

X(ω-ωs )+X(ω)+X(ω+ωs)

(π) (π) (π) (π) (π) (π)

ω
−40 −ω s −20 −10 10 20 ωs 40

, Main interval [-15 15]

Figure 4: Above: X(ω) for x(t) = cos(10t). Middle: X(ω − ωs ) + X(ω). Bottom: X(ω −
ωs ) + X(ω) + X(ω + ωs ) with ωs = 30 rad/s


!
π X
Xs (ω) = δ(ω − 10 − kωs ) + δ(ω + 10 − kωs )
Ts k=−∞

!
π X
= δ(ω − 10 − 30k) + δ(ω + 10 − 30k) since ωs = 30 rad/s
Ts k=−∞

π
Using now formula (3) and recalling that Ts = 15
, we obtain:


!
Ω π X 15Ω 15Ω
X(Ω) = Xs (ω = ) = π δ( − 10 − 30k) + δ( + 10 − 30k)
Ts 15 k=−∞
π π
∞ 2π 2π !
Ω− − 2πk Ω+ − 2πk
  
π X
3 3
= π δ π + δ π
15 k=−∞ 15 15

Now using the following property of the Dirac impulse:


 

δ = α δ(Ω),
α

we obtain that

13

!
X 2π 2π
X(Ω) = π δ(Ω − − 2πk) + δ(Ω + − 2πk)
k=−∞
3 3

Consequently, as expected, we obtain the same X(Ω) as in item [d.].

5.g. As mentioned in the summary of theory, the reconstruction formula is equivalent


to filtering the signal corresponding to Xs (ω) by an ideal low-pass filter whose frequency
response H(ω) is given by:

Ts for − ω2s ≤ ω ≤ ω2s


 
Ts for −15 ≤ ω ≤ 15
H(ω) = =
0 elsewhere 0 elsewhere
1
where the factor Ts in the expression of the filter is just there to compensate the factor Ts
in the definition of Xs (ω) (see (1)).

Consequently, the Fourier transform Xrec (ω) of xrec (t) is given by:

Xrec (ω) = π (δ(ω − 10) + δ(ω + 10))

which is equal to X(ω). Thus xrec (t) = x(t). This is logical since Shannon’s theorem was
respected.

Exercise 6.

6.a. X(ω) is the same as in [5.a.].

6.b. The sampled signal x[n] is here given by:


4π 8π 2π 2π
x[n] = x(nTs ) = cos(10n ) = cos( n) = cos( n + 2πn) = cos( n)
15 3 3 3
6.c. Shannon’s theorem is here not respected since ωs < 20.

6.d. The sampled signal is here the same as in exercise 5. Consequently, the DTFT of x[n]
is also the same as in item [5.d.] and has, in its main interval [−π π], two delta impulses
at Ω = ± 2π3
. However, since Ts is different, these two frequencies do not correspond to 10
± 2π
rad/s but to ω = 4π
3
= ± 52 rad/s.
15

6.e. The DTFT X(Ω) in its main interval presents two delta-impulses at frequencies corre-
sponding to ± 52 rad/s. The continuous-time Fourier transform of x(t) presents two delta-
impulses at ±10 rad/s. We see thus that, in its main interval, X(Ω) is NOT a faithful image
of X(ω). This is logical since Shannon’s theorem is here not respected.

6.f. The easiest way here is to deduce graphically the main interval of Xs (ω) and subse-
quently the main interval of X(Ω) . This is done in Figure 5. Note that the main interval of

14
Xs (ω) is here [− 15
4 4
15
]. We here also start with X(ω) (above plot). We see that X(ω) has
no contribution in the main interval. We now add X(ω − ωs ) = X(ω − 7.5) to X(ω). This
delivers the middle plot. We continue by adding X(ω + ωs ). This delivers the bottom plot.
We see that both X(ω −ωs ) and X(ω +ωs ) have contribution in the main interval. These two
terms are in fact the only terms in the summation (2) which have contribution in the main
interval. Indeed, X(ω − kωs ) has delta-impulses at the frequencies ω = kωs ± 10 = 15k 2
± 10
which are not located in the main interval for |k| > 1. Consequently, we conclude that
the main interval [− 15 4 4
15
] presents two delta-impulses at ω = ± 25 . Now that we have de-
termined Xs (ω) in its main interval, it is straightforward to determine X(Ω) in its main
interval by recalling that Ω = ωTs : the main interval of X(Ω) presents delta-impulses at
Ω = ± 52 Ts = ± 20π30
= ± 2π 3
which is what we had found in item [d]. Note that what is
represented in the interval [− 15 15
4 4
] in Figure 5 is in fact Ts Xs (ω) (see (1)).

X (ω)

(π) (π)

ω
−10 0 10

X(ω-ω s)+X(ω)

(π) (π) (π) (π)


ω
−10 −2.5 10 17.5

X(ω-ω s)+X(ω)+X(ω+ω s)
(π)
(π) (π) (π) (π)
(π)
ω
−17.5 −10 −2.5 2.5 10 17.5

, Main interval [-15/4 15/4]

Figure 5: Above: X(ω) for x(t) = cos(10t). Middle: X(ω − ωs ) + X(ω). Bottom: X(ω −
ωs ) + X(ω) + X(ω + ωs ) with ωs = 7.5 rad/s

6.g. As mentioned in the summary of theory, the reconstruction formula is equivalent


to filtering the signal corresponding to Xs (ω) by an ideal low-pass filter whose frequency
response H(ω) is given by:
Ts for − ω2s ≤ ω ≤ ω2s Ts for − 15 15
 
4
≤ω≤ 4
H(ω) = =
0 elsewhere 0 elsewhere
Consequently, the Fourier transform Xrec (ω) of xrec (t) is given by:
 
5 5
Xrec (ω) = π δ(ω − ) + δ(ω + )
2 2

15
which is equal to the continuous-time Fourier transform of cos( 52 t). Thus xrec (t) = cos( 52 t)
and is thus completely different from x(t) = cos(10t). This is logical since Shannon’s theorem
was here not respected and information is lost through sampling (here the lost information
is the actual frequency of the signal x(t)).

6.h. To sum up what we have seen in Exercises 5 and 6, we can say that the signal
x[n] = x(t = nTs ) is a good image of x(t) in Exercise 5 since Shannon’s theorem is re-
spected while the aliasing in Exercise 6 makes of x[n] a cosine corresponding to a much
lower frequency4 (i.e. ω = 2.5 rad/s) than the frequency of x(t) (i.e. ω = 10 rad/s).
This is confirmed by what we observe in Figures 1 and 2. In Figure 2, we indeed see that
x[n] = x(t = nTs ) corresponding to Exercise 6 is a periodic signal with a slower oscillation
(i.e. a smaller frequency) than x(t) while, in Figure 1, x[n] and x(t) have the same frequency.

Exercise 7.

7.a. We observe that the sampling frequency ωs = 80π rad/s fulfills Shannon’s theorem for
the signal of interest x1 (t). Indeed ωB < ω2s . Consequently, the sampled signal x1 [n] will
have a DTFT X1 (Ω) which, in its main interval [−π π], will be a non-distorted image of the
continuous-time Fourier transform X1 (ω) of x1 (t).

7.b. The continuous-time Fourier transform X(ω) of x(t) is:

X(ω) = X1 (ω) + A π ( δ(ω + 100π) + δ(ω − 100π))

with X1 (ω) = 0 for all |ω| > 30π. X(ω) is represented below for a triangular X1 (ω) (left
plot: X1 (ω); right plot X(ω)):
X 1(ω) X(ω)

(Απ) (Απ)
ω
−ω Β 0 ωΒ ω −100π −ωΒ 0 ωΒ 100π

The Fourier transform Z(ω) of z(t) is given by F (ω)X(ω) and is thus equal to X1 (ω). This
yields to z(t) = x1 (t). The sampled signal z[n] is thus equal to x1 [n] and Z(Ω) = X1 (Ω) for
all Ω.

7.c. If we directly sample x(t), we obtain x[n] = x1 [n] + x2 [n]. As observed in item [a.],
the signal x1 [n] has a DTFT X1 (Ω) which, in the main interval, is a non-distorted image of
X1 (ω). Consequently, in this interval [−π π], X1 (Ω) is non-zero only between −ΩB and ΩB
with ΩB = Ts ωB = 3π 4
.

Note now that ωs is not high enough for Shannon’s theorem to hold for x2 (t). Aliasing
will occur. The signal x2 [n] is given by:
4
By frequency, we mean here the actual frequency in rad/s, NOT the normalized frequency Ω.

16
1
x2 [n] = Acos(100πnTs ) = Acos(100πn ) = Acos(0.5πn + 2πn) = Acos(0.5πn)
40
which has a DTFT X2 (Ω) given by:

!
X
X2 (Ω) = A π δ(Ω + 0.5π − 2πk) + δ(Ω − 0.5π − 2πk)
k=−∞

In the main interval for Ω, X2 (Ω) presents two delta-impulses at Ω = ±0.5π. Consequently,
the DTFT X(Ω) = X1 (Ω)+X2 (Ω) of x[n] can be represented as follows in the interval [−π π]
if we assume a triangular X1 (Ω) (left plot: X1 (Ω); right plot X(Ω)):
X 1(Ω) X(Ω)

(Απ) (Απ)
Ω Ω
−ΩΒ 0 ΩΒ −ΩΒ π/2 0 π/2 Ω Β

Because of the aliasing, the perturbation x2 (t) lying outside the bandwidth of x1 (t) before
sampling is thus transformed into a perturbation at a frequency within the bandwidth of
x1 [n]. The DTFT X2 (Ω) of the sampled perturbation lying within the bandwidth of the
signal of interest, it is no longer possible to remove it without harming the signal of interest.

Exercise 8.

8.a. Let us begin with the case x(t) = cos(ω0 t) = cos(20πt). Since ω0 is smaller than 50π,
the signal z(t) obtained after filtering with F (ω) is equal to x(t). The signal z[n] is therefore
given by z[n] = cos(ω0 nTs ) = cos(0.4πn). Consequently, we see that the DTFT of z[n] is
given by:

X
X(Ω) = π δ(Ω + 0.4π − 2πk) + δ(Ω − 0.4π − 2πk)
k=−∞

which, in its main interval, only presents two delta-impulses one at Ω = 0.4π and one
at Ω = −0.4π. These delta-impulses will launch the alarm. The same can be said for
x(t) = cos(2ω0 t). Indeed, 2ω0 is also smaller than 50π and the signal z(t) obtained af-
ter filtering with F (ω) is thus also equal to x(t). The signal z[n] is here given by z[n] =
cos(2ω0 nTs ) = cos(0.8πn). Consequently, we see that the DTFT of z[n] will present, in its
main interval, delta-impulses at Ω = ±0.8π which will launch the alarm.

8.b. The signal x(t) will be filtered away by F (ω). Thus, z(t) = 0 ∀t and z[n] = 0 ∀n. No
alarm is launched since Z(Ω) = 0 and does not present any delta-impulses.

8.c. If there is no filter z(t) = x(t). Therefore, we have:

17
12 2
z[n] = x[n] = cos(120πnTs ) = cos( πn) = cos( πn + 2πn) = cos(0.4πn)
5 5
As shown in item [a.], the DTFT of such a signal z[n] will present delta-impulses at Ω =
±0.4π which will launch the alarm while the vibration is not dangerous. The anti-aliasing
filter is thus absolutely necessary.

Exercise 9.

9.a. The Fourier transform X(ω) of x(t) is given by (see Table 3.2):
1
X(ω) =
1 + jω

Note that x(t) is not band-limited since there exists no ωB such that X(ω) = 0 for |ω| > ωB .

9.b. Given Ts , the sampled signal has the following expression x[n] = e−nTs u[n] = bn u[n]
with b = e−Ts . The DTFT X(Ω) of x[n] is given by:
1
X(Ω) =
1 − be−jΩ
Using then relation (4), we see that Xs (ω) = X(Ω = Ts ω) and thus that:
1
Xs (ω) =
1 − be−jωTs

As always, this function is periodic with period ωs = 2π


Ts
. This is what we observe in Figure 3.
The dashed curve has a period of 10 rad/s and the dotted curve a period of 20 rad/s.

9.c. In item [a.], we already observed that X(ω) is nonzero at each ω. Consequently, the
sampling frequency can never be chosen in such a way that Shannon’s theorem holds. Con-
sequently, whatever the value of ωs (and thus Ts ), aliasing will occur. Consequently, Xs (ω)
cannot be a perfect image of X(ω). We nevertheless see that the aliasing effects decrease
when Ts decreases. It is logic since X(ω) is smaller and smaller for increasing |ω| and thus
the overlapping due to the shifted terms X(ω − kωs ) is smaller for increasing values of ωs .

9.d. The Fourier transform Y (ω) of y(t) (output of the filter) is:

X(ω) for −10 ≤ ω ≤ 10
0 elsewhere

Y (ω) is thus a truncated version of X(ω) and its modulus can thus be represented as in
Figure 6.

18
9.e. Due to the fact that Y (ω) = 0 for |ω| > 10, sampling y(t) at a sampling frequency
of 20 rad/s will not cause aliasing (i.e. Shannon’s theorem is respected). Consequently, in
its main interval [−10 10], Ys (ω) will be a faithful image of Y (ω) i.e. Ts Ys (ω) = Y (ω) and
thus |Ts Ys (ω)| = |Y (ω)|. Since Ys (ω) is periodic with period ωs = 20 rad/s, we can thus
represent |Ts Ys (ω)| as in Figure 7.

9.f. As mentioned in the summary of theory, the reconstruction formula (6) is equivalent
to filtering the signal corresponding to Ys (ω) by an ideal low-pass filter whose frequency
response H(ω) is given by:

Ts for −10 ≤ ω ≤ 10
H(ω) =
0 elsewhere

Consequently, we obtain that the Fourier Transform Yrec (ω) of yrec (t) is precisely equal to
Y (ω) and thus that yrec(t) = y(t).

The signal y(t) is only a filtered version of x(t). Thus, we do not have that yrec (t) =
x(t). However, the application of an anti-aliasing filter is beneficial. Indeed, we have that
Yrec (ω) = X(ω) for all ω ∈ [−10 10] which is not the case when the anti-aliasing filter is
absent. Indeed, if there is no anti-aliasing filter, the reconstructed signal is given by (7)
and, following a similar reasoning as above, we see that the Fourier transform Xrec (ω) of
xrec (t) will be equal to H(ω)Xs(ω) with the Xs (ω) represented in Figure 3. In Figure 8, we
compare the modulus of Xrec (ω) and Yrec (ω). Since Yrec (ω) = Y (ω) is equal to X(ω) for
the frequencies in [−10 10], we see in this last figure that Yrec (ω) is a better image of X(ω)
than Xrec (ω) and, consequently, the signal yrec (t) reconstructed with y[n] is a better image
of x(t) than the signal xrec (t) reconstructed with x[n].

1.5

0.5

0
−30 −20 −10 0 10 20 30
ω

Figure 6: |Y (ω)|

Exercise 10.

10.a. The length of x(t) is 600 seconds. Consequently, the signal x[n] = x(t = nTs ) will
contain 600
Ts
= 300ω
π
s
samples. We see thus that the number of samples increases with ωs .

10.b. We will engrave the sample of the signal z[n] generated with the following device:

19
1.5

0.5

0
−30 −20 −10 0 10 20 30
ω

Figure 7: |Ts Ys (ω)|


1.5

0.5

0
−30 −20 −10 0 10 20 30
ω

Figure 8: |Yrec(ω)| (solid) |Xrec (ω)| (dotted)

x(t) z(t) z[n]


Filter F(ω) Sampling

The anti-aliasing filter F (ω) is given by



1 for −120000 ≤ ω ≤ 120000
F (ω) =
0 elsewhere

and the sampling is done at a sampling frequency ωs = 240000 rad/s.

Why is this the best way. First of all, note that, by Shannon’s theorem, the sampling
frequency ωs should be al least two times larger than the highest hearable frequency in order
to keep this hearable information intact into the sampled signal. Thus:

ωs ≥ 240000

On the other hand, the smaller ωs , the smaller the number of samples in z[n]. Since we wish
to have the smallest number of samples, we thus choose ωs = 240000 rad/s.

The anti-aliasing filter removes from the signal x(t) all frequency component higher than
the hearable frequencies. Consequently, if you listen to z(t) and x(t), you do not hear any

20
difference. However, removing these frequencies are completely necessary in order to guar-
antee that the sampling of z(t) occurs without aliasing i.e. to avoid that high frequency
components are mirrored into the low frequency components and modify these components
such as has been the case in Figure 3.

Note that, in the actual CD technology, ωs is not chosen equal to 240000, but to 260000
rad/s.

Conversion table.

THIRD edition SECOND edition


p. 14 p. 19
p. 15 p. 20
p. 16 p. 21
p. 244 p. 233
p. 465 p. 486
Table 3.2, p. 144 Table 4.2, p. 192
Table 4.1, p. 177 Table 7.1, p. 308
Table 4.2, p. 178 Table 7.2, p. 309
(4.2) (7.2)
(4.5) (7.5)
Figure 5.17, p. 244 Figure 5.24, p. 234

21
Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 5:
SPECTRAL ANALYSIS and FILTERING WITH DISCRETE-TIME SYSTEMS (A)

Spectral analysis

Preliminary. Consider the discrete-time signal wN [n] depicted in the top-left part of Ta-
ble 1:

1 for 0 ≤ n ≤ N − 1
wN [n] =
0 elsewhere

a. Show that the DTFT WN (Ω) of wN [n] is given by:

1 − e−jΩN
WN (Ω) =
1 − e−jΩ
2πk
b. Show that, in its main interval, the DTFT WN (Ω) is equal to 0 for all Ω = N
with k
an integer 6= 0.

The modulus of WN (Ω) is represented in its main interval in Table 1 for N = 20, N = 50
and N = 200. In this figure, we observe the following:

• |WN (Ω)| is made up of a central lobe of amplitude N and of width N
. It presents also
smaller secondary lobes of width 2πN
.

• All these lobes (and in particular the central one) become narrower for increasing values
of N.1
• The amplitudes of the secondary lobes decrease for increasing value of |Ω|

• When we compare this decrease for different values of N, this decrease is faster for
larger N.

Exercise 1. (inspired by J.W. Wingerden et al., Wind Energy, Wiley, 2008).

Description. The blades of wind turbines (see the left part of Table 2) are subject to strong
vibrations which induce sinusoidal strains in the blade. To enhance the life expectancy of
the turbine, it is very important to compensate these vibrations using a feedback controller.
In order to be able to design such a controller, it is absolutely necessary to know at which
frequencies these vibrations occur. To obtain this information, the blade is experimented
in a wind tunnel under realistic circumstances and the strain x(t) in the blade is measured.
Suppose that this strain x(t) is given by:

x(t) = A1 sin(ω1 t) + A2 sin(ω2 t) + A3 sin(ω3 πt)


1
This behaviour observed in Table 1 follows from the fact that the width of the lobes are inversely
proportional to N . See the first item.

1
25
2

1.8

20
1.6

1.4 N

15
1.2

|W(Ω)|
w[n]

10
0.8

0.6

5
0.4

0.2
4π/N

0 0
0 5 10 15 20 25 30 −3 −2 −1 0 1 2 3
n 2π/N Ω

60

200

50 180

160

40 140

120
W(Ω)

W(Ω)
30
100

80
20
60

40
10

20

0 0
−3 −2 −1 0 1 2 3 −3 −2 −1 0 1 2 3
Ω Ω

Table 1: Top Left: wN [n] for N = 20; Top Right: Modulus of WN (Ω) for N = 20; Bottom
Left: Modulus of WN (Ω) for N = 50; Bottom Right: Modulus of WN (Ω) for N = 200

with A1 = A2 = 1, A3 = 21 , ω1 = 4π, ω2 = 6π and ω3 = 30π rad/s. This expression is of


course unknown in practice. The only thing we know a-priori is that all important vibrations
have a frequency smaller than 40π rad/s. In the sequel, we will show how we can deduce
the values of the vibration frequencies from the measurement of x(t).

Measurement setup. The strain x(t) is measured as illustrated in the right part of Table 2.
In this picture, the anti-aliasing filter F (ω) is given by:

1 for −40π ≤ ω ≤ 40π
F (ω) =
0 elsewhere

and the sampling occurs with a sampling frequency equal to ωs = 80π rad/s.

a. Are the sampling frequency and the anti-aliasing filter well chosen?

2
blade

x(t) z(t) z[n]


Filter F(ω) Sampling

Table 2: Left: Wind turbine; Right: Measurement device

|Ζ(Ω)|

(π/2) (π) (π) (π) (π/2)


(π) Ω
−π Ω 1Ω 2 Ω3 π

Figure 1: Main interval of |Z(Ω)|

Spectral Analysis. The measured signal z[n] is represented for n = 0...399 and for n = 0...39
in the left parts of Tables 3 and 4, respectively. Based on this time-domain representation,
it is very complicated (say, impossible) to deduce the frequencies of the vibrations and their
number. We will see that it is much more easier to do so by computing and subsequently
inspecting the DTFT of the measured signal. This technique is called spectral analysis.

b. Show that the main interval of the modulus |Z(Ω)| of the DTFT of z[n] is as given in
Figure 1 with
π 3π 3π
Ω1 = = 0.31 Ω2 = = 0.47 Ω3 = = 2.35,
10 20 4
and show that we can deduce the vibration frequencies present in x(t) by inspecting
this DTFT.

3
In order to compute the DTFT Z(Ω) as represented in Figure 1, we need to measure
x(t) during an infinite time. Indeed, the DTFT of a sine is only made up of delta-impulses
if this signal is considered from t = −∞ to t = +∞. This is of course not possible in
1
practice. Suppose that we have measured x(t) for 10 seconds. Since Ts = 40 s, we have
therefore collected 400 samples of z[n]. These 400 samples are represented in the left part
of Table 3. Mathematically, the 400 measured samples can be related to the infinite signal
z[n] as follows:


z[n] for 0 ≤ n ≤ 399
z400 [n] =
0 elsewhere
= z[n] wN =400 [n]

with ωN [n] as defined in the preliminary. Using the Matlab function fft, we compute the
DTFT Z400 (Ω) of these 400 samples i.e. the DTFT of z400 [n]. The modulus of Z400 (Ω) is
represented for the interval [0 π] in the right part of Table 3.

c. Explain the shape of |Z400 (Ω)|. Use for this purpose the preliminary exercise and
recall that the DTFT of x[n]sin(Ω0 n) is 2j (X(Ω + Ω0 ) − X(Ω − Ω0 )) where X(Ω) is
the DTFT of X(Ω).

d. Can we deduce the vibration frequencies present in x(t) by inspecting |Z400 (Ω)|?

e. Can we estimate the amplitude Ai (i = 1..3) of the vibrations by inspecting |Z400 (Ω)|?

Suppose now that we have only measured x(t) during one second i.e. that we have only
collected 40 samples of z[n]:


z[n] for 0 ≤ n ≤ 39
z40 [n] =
0 elsewhere
= z[n] wN =40 [n]

The measured signal z40 [n] and the modulus |Z40 (Ω)| of its DTFT Z40 (Ω) are represented
in Table 4. Compared to the case N = 400, we observe that it is in this case much more
complicated to deduce information about x(t) by inspecting |Z40 (Ω)|.

f. Explain this phenomenon.

The situation described until now is not really realistic. Indeed, the measurement is never
done without measurement noise. Suppose that z[n] is in fact given by:

z[n] = z(t = nTs ) + v[n]

where the measurement noise v[n] is a so-called white noise (see session 6) of power equal
to 1. For the moment, the only thing we have to know about this noise is that its amplitude
at each n is commonly smaller than 2 and that the DTFT of v[n] wN [n] is at each Ω of the

4

order of N times the power of the white noise.

N = 400 samples of the noisy z[n] have been collected. The measured signal is represented
in the left part of Table 5. We see that the signal is completely corrupted by the noise.
It is thus now really completely impossible to determine the vibration frequencies from the
time-domain representation. However, as can be seen in the right part of Table 5, the DTFT
of this noisy measurement presents a very similar shape as the one represented in Table 3
and, consequently, the vibration frequencies can still be easily determined.

g. Explain this phenomenon.

3
200

150
1
zmeas[n]

0
100

−1

50

−2

−3 0
0 50 100 150 200 250 300 350 400 0 0.5 1 1.5 2 2.5 3
n Ω

Table 3: Left: z400 [n]; Right: |Z400 (Ω)| in the interval [0 π]

3 25

2
20

1
15
[n]
meas

0
z

10

−1

5
−2

−3 0
0 5 10 15 20 25 30 35 40 0 0.5 1 1.5 2 2.5 3
n Ω

Table 4: Left: z40 [n]; Right: |Z40 (Ω)| in the interval [0 π]

5
5

200
4

2 150

1
zmeas[n]

0
100

−1

−2

50
−3

−4

−5 0
0 50 100 150 200 250 300 350 400 0 0.5 1 1.5 2 2.5 3
n Ω

Table 5: Left: noisy measurement; Right: Modulus of the DTFT of the noisy measurement

Solutions
Preliminary.

a. The DTFT WN (Ω) of wN [n] is defined by:



X N
X −1
−jΩn
WN (Ω) = wN [n]e = e−jΩn
n=−∞ n=0

Using now the property (4.5) of the book leads to the result.

2πk
b. Let us evaluate WN (Ω) at Ω = N
for an arbitrary k 6= 0:
=1
z }| {
2πk 1 − e−j2πk
WN (Ω = )= 2πk = 0
N 1 − e−j N

Exercise 1.

1.a. We know that the important vibrations have a frequency smaller than 40π rad/s. Con-
sequently, we have to choose ωs at least two times larger than 40 π. This is the case here.
The anti-aliasing filter removes from x(t) all frequencies that could be mirrored by aliasing
into the frequency range of interest i.e. [0 40π].

1.b. We see that x(t) contains three frequencies smaller than 40π. Consequently, the signal
1
z(t) is equal to x(t). Since Ts = 40 s, the signal z[n] is given:

6
π 3π 1 3π
z[n] = sin( n) + sin( n) + sin( n)
10
|{z} 20
|{z} 2 4
|{z}
Ω1 Ω2 Ω3

The DTFT Z(Ω) is thus given by (Table 4.1 of the book):



X
Z(Ω) = j π δ(Ω + Ω1 − 2πk) − π δ(Ω − Ω1 − 2πk) + π δ(Ω + Ω2 − 2πk)...
k=−∞

π π
... − π δ(Ω − Ω2 − 2πk) + δ(Ω + Ω3 − 2πk) − δ(Ω − Ω3 − 2πk)
2 2
Consequently, we can see that, in its main interval [−π π], the modulus of Z(Ω) presents
delta-impulses at Ω’s equal to ±Ω1 , ±Ω2 and ±Ω3 . This is precisely what we observe in
Figure 1.

By inspecting |Z(Ω)|, we can deduce that z[n] and z(t) contain three frequencies. We can
therefore also conclude that x(t) contains three frequencies in the frequency range of interest
[0 40π] i.e. the frequency range where the important vibrations occur. At which frequencies
do these vibrations occur? They occur at the frequencies where we observe a peak i.e. Ω1 ,
Ω2 and Ω3 . These are normalized frequencies. Based on these normalized frequencies we can
now deduce the actual frequencies of the signal x(t) by using the relation ω = Ω/Ts:

ω1 = Ω1/Ts = 40π/10 = 4π

ω2 = Ω2/Ts = 120π/20 = 6π

ω3 = Ω3/Ts = 120π/4 = 30π

1.c. Using the expression of z[n] given in item [b] and the property of multiplication by a
sine in Table 4.2 of the book, we see that the DTFT Z400 (Ω) of z[n] wN [n] is equal to:
Z400(Ω) = (j/2) (W_{N=400}(Ω + Ω1) − W_{N=400}(Ω − Ω1)) + (j/2) (W_{N=400}(Ω + Ω2) − W_{N=400}(Ω − Ω2))
          + (j/4) (W_{N=400}(Ω + Ω3) − W_{N=400}(Ω − Ω3))

In the interval [0 π], the behaviour is mainly determined by the summation of three shifted
versions of W_{N=400}(Ω):

−(j/2) W_{N=400}(Ω − Ω1) − (j/2) W_{N=400}(Ω − Ω2) − (j/4) W_{N=400}(Ω − Ω3)

Since N is here very large, we can conclude, based on what has been observed in the pre-
liminary exercise, that the secondary lobes of W_{N=400}(Ω) are small and that their influence
in the above summation will therefore be negligible. In other words, the secondary lobes
of e.g. W_{N=400}(Ω − Ω3) do not change the shape of the functions W_{N=400}(Ω − Ω1) and
W_{N=400}(Ω − Ω2). Since Ω1 and Ω2 are quite close, the influence of the secondary lobes of
W_{N=400}(Ω − Ω1) on the function W_{N=400}(Ω − Ω2) (and conversely) is relatively larger, but
not really significant. Consequently, in the interval [0 π], |Z400(Ω)| is approximately three
repetitions of the shape of |W_{N=400}(Ω)|, but centered at three different frequencies Ω = Ω1,
Ω = Ω2 and Ω = Ω3 and scaled by a factor 1/2 or a factor 1/4 (the "−j" disappears when
taking the modulus).

This scaling factor explains the amplitude of the peaks at Ω = Ωi (i = 1..3) that we
observe in Table 3. The first peak is equal to N/2 = 200 since the maximal amplitude of
|W_{N=400}(Ω − Ω1)| is N = 400. The same can be said for the second peak. The third peak
has an amplitude of N/4 = 100. A (rough) expression of the peak amplitude (when N is large
enough; see later) is N·Ai/2 where Ai is the amplitude of the sine at the frequency Ωi.
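This peak rule is easy to check numerically. The sketch below (an illustration, not part of the original solution) evaluates the DTFT of z[n] wN[n] on a dense grid of [0, π] and compares the peak heights with N·Ai/2; the grid size is an arbitrary choice.

import numpy as np

N = 400
n = np.arange(N)
Omega1, Omega2, Omega3 = np.pi / 10, 3 * np.pi / 20, 3 * np.pi / 4
z = np.sin(Omega1 * n) + np.sin(Omega2 * n) + 0.5 * np.sin(Omega3 * n)

# DTFT of the windowed signal z[n]*w_N[n] on a grid of [0, pi]
Omega = np.linspace(0, np.pi, 4000)
Z = np.exp(-1j * np.outer(Omega, n)) @ z

for Om_i, A_i in [(Omega1, 1.0), (Omega2, 1.0), (Omega3, 0.5)]:
    k = np.argmin(np.abs(Omega - Om_i))
    print(abs(Z[k]), N * A_i / 2)   # peak height vs. the rough rule N*A_i/2 (200, 200, 100)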

1.d. In item [c.], we have used the expression of x(t) to explain the shape of |Z400(Ω)|. Now,
in practice, we have to go the other way around: we look at |Z400(Ω)| to deduce the properties
of x(t), for example the frequencies present in x(t). For this purpose, we proceed as follows.

By inspecting |Z400 (Ω)|, we can e.g. “read” the frequencies where the three peaks appear:

Ω = 0.31

Ω = 0.47

Ω = 2.35

Based on these three normalized frequencies, we can now deduce the actual frequencies of
the signal x(t) by using the relation ω = Ω/Ts:

ω = 0.31/Ts = 12.4 ≈ ω1

ω = 0.47/Ts = 18.8 ≈ ω2

ω = 2.35/Ts = 94 ≈ ω3
1.e. By reading the amplitude of the three peaks, we can also deduce the amplitude of the
vibrations at the three frequencies. For this purpose, we can use the relation deduced in
item [c.]. The amplitude of the first two peaks is 200 which corresponds to an amplitude of
1 for the corresponding sine function in x(t). The amplitude of the third peak is 100 which
corresponds to an amplitude of 0.5 for the corresponding sine function in x(t).

1.f. The DTFT Z40 (Ω) is here given by (see item [c.])
Z40(Ω) = (j/2) (W_{N=40}(Ω + Ω1) − W_{N=40}(Ω − Ω1)) + (j/2) (W_{N=40}(Ω + Ω2) − W_{N=40}(Ω − Ω2))
         + (j/4) (W_{N=40}(Ω + Ω3) − W_{N=40}(Ω − Ω3))
Since N is here much smaller than in item [c.], we can conclude, based on what has been
observed in the preliminary exercise, that the secondary lobes of WN =40 (Ω) are much larger
than the ones for the case N = 400. These secondary lobes therefore play a much more
important role in the summation above. Due to these important secondary lobes, it is much
more difficult to determine from |Z40 (Ω)| the number of sine functions in x(t). Indeed, from
|Z40 (Ω)|, we could conclude that x(t) contains four sine functions while it actually only con-
tains three of them.

1.g. That the modulus of the DTFT Z400,noisy(Ω) of the noisy measurement presents a very
similar shape to |Z400(Ω)| can be explained by the fact that the DTFT of the noise part is of
amplitude ≈ √N = 20, which is much smaller than the amplitudes of the peaks. Consequently,
the disturbance induced by the noise on the DTFT is not significant.

Remark. In order to reduce the influence of the noise even more, spectral analysis is
generally performed by representing the so-called periodogram instead of the modulus of the
DTFT. The periodogram is defined as:

(1/N) |Z_{N,noisy}(Ω)|²

which in this case delivers (1/400) |Z400,noisy(Ω)|². The periodogram is represented in Figure 2.
In the case of the periodogram, the contribution of the noise is of amplitude ≈ 1 while the
amplitude of the peaks is N·Ai²/4.
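In practice the periodogram is usually computed with an FFT, which gives the DTFT of the windowed signal on the grid Ω = 2πk/N. A minimal sketch of this computation (an illustration under the assumptions of this exercise, with a unit-power white noise) could look as follows.

import numpy as np

rng = np.random.default_rng(0)
N, Ts = 400, 1 / 40
n = np.arange(N)
z = (np.sin(np.pi / 10 * n) + np.sin(3 * np.pi / 20 * n)
     + 0.5 * np.sin(3 * np.pi / 4 * n) + rng.standard_normal(N))   # noisy measurement

Z = np.fft.fft(z)                        # DTFT of z[n]*w_N[n] at Omega_k = 2*pi*k/N
periodogram = np.abs(Z) ** 2 / N         # noise floor ~1, peaks ~N*A_i^2/4
Omega = 2 * np.pi * np.arange(N) / N

half = Omega <= np.pi                    # main interval [0, pi]
top = np.sort(Omega[half][np.argsort(periodogram[half])[-3:]])
print(top)          # ~0.31, 0.47, 2.36  (normalized frequencies of the peaks)
print(top / Ts)     # ~4*pi, 6*pi, 30*pi rad/s (the vibration frequencies)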

Conversion table.

THIRD edition SECOND edition


Table 4.1, p. 177 Table 7.1, p. 308
Table 4.2, p. 178 Table 7.2, p. 309
(4.5) (7.5)

Figure 2: (1/400) |Z400,noisy(Ω)|² in the interval [0 π]
Course: WB2310 Systeem- en Regeltechniek 3 (2006-2007)
Exercise session 5:
SPECTRAL ANALYSIS and FILTERING WITH DISCRETE-TIME SYSTEMS (B)

Computers nowadays play a major role in many engineering fields. This is also true for the
field of signal analysis and signal processing. From this point of view, discrete-time signals
are very important since they are the only ones that can be treated by a computer. As
seen in the Exercise 1 of this session, in order to perform the spectral analysis of a physical
(continuous-time) signal x(t), we must first sample it and then use the computer to achieve
the analysis. Computers1 can also be very useful to transform an initial signal x[n] (the
input) into another signal y[n] with desired properties (the output):

• Such a transformation can e.g. be the removal of high-frequency components via a
low-pass filter. For this purpose, computers are not compulsory: the filtering operation
can also be done in continuous time. However, filtering is much easier to implement
in a computer environment (i.e. via lines of code) than via an analog circuit. Moreover,
it is much easier to design (almost) ideal filters in discrete time than it is in
continuous time.

• Another possible application is digital control. Due to their simple implementation, the
majority of the control laws are indeed nowadays programmed into computers. Such
digital controllers compute the control input u[n] that has to be applied to a real-life
system based on a sampled version y[n] = y(t = nTs ) of the actual output y(t) of this
system. Note that the discrete-time u[n] cannot be directly applied to the system. It
has first to be transformed into a continuous system via a digital-analog conversion
(see the third part of this session for more details)

The transformation of a discrete-time signal x[n] into a new signal y[n] can be described by
the theory of discrete-time systems. A discrete-time system is a system which has a discrete-
time input signal x[n] and a discrete-time output signal y[n]. While continuous-time systems
are described by a set of differential equations, discrete-time systems are described by a set
of difference equations relating the input signal x[n] and the output signal y[n]. An example
of difference equations is as follows:

y[n] = −0.5y[n − 1] + x[n − 1] (1)


The output of the discrete-time system for a given input x[n] can be computed by solving the
difference equations for each n. Even though this is effectively how the output is computed
by a computer, this method is certainly not the only one to determine y[n] from x[n] and,
furthermore, the difference equation representation does not say much on the properties of
the system. It is e.g. very difficult to see whether the system in question is a low-pass filter
or something else. The theory of the Discrete-Time Fourier Transform (DTFT) allows us
to gain insights in the behaviour of the discrete-time system without having to solve the
1
We here use the notion of “computer” in a very broad sense. This notion includes any digital devices
such as MP3, CD player, GSM,..

1
difference equation and provides alternative methods to compute the output y[n].

An important tool for this purpose is the frequency response H(Ω) of the discrete-time
system. To determine H(Ω), we apply the shift in time property of the DTFT (see Table 4.2
p. 178) to the difference equation(s). This delivers an expression of the DTFT Y (Ω) of the
output y[n] as a linear function of the DTFT X(Ω) of the input x[n]. Then, H(Ω) is just:

Y (Ω)
H(Ω) = expression (5.60) of the book
X(Ω)

For example, suppose that a discrete-time system is described by the difference equa-
tion (1). This equation can be rewritten using the shift in time property as Y (Ω) =
−0.5 e−jΩ Y (Ω) + e−jΩ X(Ω). Y (Ω) is thus equal to the following function of X(Ω):
Y (Ω) = (e−jΩ /(1 + 0.5e−jΩ ))X(Ω). The frequency response H(Ω) of the system is thus:
e−jΩ
1+0.5e−jΩ
.
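As a brief numerical illustration (not part of the original text), this frequency response can be checked by simulating the difference equation (1) for a cosine input and comparing the steady-state output with expression (5.65); the test frequency Ω0 = 0.6 is an arbitrary choice.

import numpy as np

Omega0 = 0.6                                  # arbitrary test frequency
H = np.exp(-1j * Omega0) / (1 + 0.5 * np.exp(-1j * Omega0))

# Simulate y[n] = -0.5*y[n-1] + x[n-1] for x[n] = cos(Omega0*n)
L = 500
n = np.arange(L)
x = np.cos(Omega0 * n)
y = np.zeros(L)
for k in range(1, L):
    y[k] = -0.5 * y[k - 1] + x[k - 1]

# Steady-state prediction from (5.65): |H| * cos(Omega0*n + angle(H))
y_ss = np.abs(H) * np.cos(Omega0 * n + np.angle(H))
print(np.max(np.abs(y[100:] - y_ss[100:])))   # ~0 once the transient has died out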

Like Y(Ω) and X(Ω), the frequency response H(Ω) of a discrete-time system is also pe-
riodic with period 2π. Consequently, all the information is located in its main interval [−π π].

The frequency response H(Ω) is a very important function. There are mainly three reasons
for this.

• The frequency response H(Ω) allows one to determine the (steady-state) response y[n] of
the system when the input x[n] is given by x[n] = A cos(Ω0 n + θ) (−∞ ≤ n ≤ +∞).
The response is then

y[n] = A|H(Ω0 )|cos(Ω0 n + θ + ∠H(Ω0 )) expression (5.65) of the book

The amplitude of x[n] is multiplied by the modulus of H(Ω) evaluated at the frequency
of x[n] i.e. Ω = Ω0 . The phase of x[n] is shifted with the argument of H(Ω) at Ω = Ω0 .
See p. 250 for more details. The result also holds for sine functions.

• More generally, because of the relation Y (Ω) = H(Ω)X(Ω), we get insights in the
behaviour of the system by looking at the frequency response H(Ω). For example, in
its main interval, a low-pass filter should have a frequency response H(Ω) equal to 1
for frequencies up to the cut-off frequency and a frequency response which is very small
for all the other frequencies.

• In the first item, we have seen that the frequency response gives a direct way to compute
the output y[n] of the system when the input x[n] is a (co)sine function. For some
other input signals, the frequency response H(Ω) provides a method to compute the
output y[n] which can be easier than the use of the difference equations. This method
is based on the relation Y (Ω) = H(Ω)X(Ω) and is as follows: determine the Fourier
transform X(Ω) of x[n]. With X(Ω), determine Y (Ω) by multiplying H(Ω) and X(Ω):
Y (Ω) = H(Ω)X(Ω). The output signal y[n] can then be determined by finding the
inverse Fourier transform of Y (Ω) (see (4.27) and the remark below).

Remark. (Inverse) DTFTs can be computed via their respective definition (4.2) and (4.27).
Instead, they can be deduced via standard Fourier transform pairs (see Table 4.1 on the
page 177) in combination with properties of the Fourier transforms (see Table 4.2 on page
178).

Another important quantity is the inverse DTFT h[n] of H(Ω) i.e. h[n] = F −1 (H(Ω)). The
signal h[n] is called the pulse response of the discrete-time system since h[n] is the output
signal of the discrete-time system described by H(Ω) when the input signal x[n] is the unit
pulse function δ[n] (see page 15). From the pulse response h[n], it can be determined whether
the system is stable and/or causal. The system is indeed stable if and only if h[n] tends to
0 when n tends to +∞. The system is causal if and only if h[n] = 0 for all n < 0. These
properties come from the fact that the output y[n] of a system to an input x[n] can also be
expressed as the convolution of h[n] with x[n]:
y[n] = h[n] ∗ x[n] = Σ_{i=−∞}^{+∞} h[i] x[n − i]          expression (5.58) of the book
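To make the link between the difference-equation, pulse-response and convolution viewpoints concrete, here is a small sketch (an illustration using the example system (1), whose pulse response h[n] = (−0.5)^{n−1} u[n−1] can be derived as in Problem 3 below, and an arbitrary short input): the output computed by the recursion coincides with the convolution of the input with the (truncated) pulse response.

import numpy as np

L = 60
h = np.zeros(L)
h[1:] = (-0.5) ** (np.arange(1, L) - 1)       # pulse response of y[n] = -0.5*y[n-1] + x[n-1]

x = np.array([1.0, 2.0, 0.0, -1.0])           # arbitrary short input (illustrative)
y_conv = np.convolve(h, x)[:L]                # output via the convolution sum (5.58)

y_rec = np.zeros(L)                           # output via the difference equation itself
x_pad = np.concatenate([x, np.zeros(L - len(x))])
for n in range(1, L):
    y_rec[n] = -0.5 * y_rec[n - 1] + x_pad[n - 1]

print(np.max(np.abs(y_conv - y_rec)))         # ~0: both methods agree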

Z-transform - Transfer function. Besides its frequency response H(Ω), a discrete-time
system can be represented by a transfer function H(z). The transfer function H(z) is
defined as the ratio of the Z-transform Y(z) = Σ_{n=−∞}^{+∞} y[n] z^{−n} of the output signal and the
Z-transform of the input signal X(z) = Σ_{n=−∞}^{+∞} x[n] z^{−n} (see Chapter 7):

H(z) = Y(z)/X(z) = ( Σ_{n=−∞}^{+∞} y[n] z^{−n} ) / ( Σ_{n=−∞}^{+∞} x[n] z^{−n} )

There is a simple relation between H(Ω) and H(z):

H(Ω) = H(z = e^{jΩ})

Thus for example, the transfer function H(z) corresponding to H(Ω) = e^{−jΩ}/(1 + 0.5e^{−jΩ}) is:

H(z) = z^{−1}/(1 + 0.5z^{−1}) = 1/(z + 0.5)

It is possible to deduce the transfer function H(z) from a difference equation such as (1)
without deducing first the frequency response H(Ω). For this purpose, we use the shift
in time property of the Z-transform which states that the Z-transform of y[n − c] (with
c an integer) is equal to z −c Y (z) with Y (z) the Z-transform of y[n]. Using this prop-
erty, equation (1) leads to Y (z) = −0.5z −1 Y (z) + z −1 X(z). Consequently, we obtain that
H(z) = Y(z)/X(z) = z^{−1}/(1 + 0.5z^{−1}).

Based on H(z), we can also verify whether the discrete-time system is stable: a system is
stable if and only if all the poles of H(z) are located inside the unit circle, i.e. if and only if
the moduli of these poles are all strictly smaller than one. For the H(z) given above, we
see that z + 0.5 = 0 for z = −0.5, whose modulus is smaller than one. Thus the system
corresponding to H(z) is stable. The equivalent of H(z) for continuous-time systems is the
transfer function H(s) in the Laplace domain.
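This pole check is easy to automate. The sketch below (an illustration, not part of the original text) computes the poles of the example H(z) above and of the H(z) of Problem 6 further on, and tests whether their moduli are strictly smaller than one.

import numpy as np

# H(z) = z^{-1}/(1 + 0.5 z^{-1}) = 1/(z + 0.5): the pole is the root of z + 0.5
poles = np.roots([1, 0.5])
print(poles, np.all(np.abs(poles) < 1))      # [-0.5]  True -> stable

# H(z) = 3z/(z^2 - z + 0.5) (Problem 6): poles are the roots of z^2 - z + 0.5
poles6 = np.roots([1, -1, 0.5])
print(poles6, np.all(np.abs(poles6) < 1))    # 0.5 +/- 0.5j, modulus sqrt(2)/2 < 1 -> stable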

The references to the book pertain to the third edition. At the end of the session, there is a
conversion table for the second edition.

Problem 2 (examination January 2006). Following the diagram as sketched in Figure 1,


a continuous-time signal x(t) is sampled with a sampling period Ts = 0.04 s. This delivers a
discrete-time signal x[n]. Before further use in the computer, the signal x[n] is filtered with
a digital filter whose frequency response H(Ω) is represented in a Bode plot in Figure 2. The
output of this filter is denoted y[n].

[Block diagram: x(t) → sampling (Ts) → x[n] → Filter → y[n]]

Figure 1: Filtering of sampled signal.

[Bode plot with two panels over Ω ∈ [−π, π]: top panel |H| (between roughly 0.4 and 1.0), bottom panel Angle(H) in degrees (between 0 and −600).]

Figure 2: Frequency response of discrete-time filter.

a. Suppose that x(t) = 2sin(ω0 t) with ω0 = 10π rad/sec. Determine the sampled signal
x[n].

b. Determine y[n] for the signal x[n] found in item [a.].

Problem 3. Consider two discrete-time systems described by the following difference equa-
tions:

y[n] = −ay[n − 1] + x[n − 1] (2)


y[n] = bx[n − 1] + 2x[n − 2] (3)
with a ∈ R and b ∈ R

a. Determine y[n] for 0 ≤ n ≤ 3 for these two systems when x[n] is given by the unit
pulse δ[n] (see page 15) i.e.

x[n] = 1 for n = 0, and x[n] = 0 elsewhere

Suppose y[n] = 0 for n < 0.

b. Determine the frequency response H(Ω) of these two discrete-time systems.

c. Determine the transfer function H(z) corresponding to these two discrete-time systems.

d. Give the condition(s) that a and b have to fulfill to ensure that both discrete-time
systems are stable.

e. Suppose a and b fulfill the condition(s) found in item [d.]. Determine the pulse response
of the two systems using what has been found in [b.]. Compare these pulse responses
with what has been found in item [a.].

Problem 4 (examination August 2006). Consider the discrete-time system whose


pulse response h[n] is given by:

h[n] = 2 for 1 ≤ n ≤ 4, and h[n] = 0 elsewhere

a. Is this system causal? Is this system stable? Motivate your answer.

b. Determine the frequency response H(Ω) of the considered system.

c. Compute, for all values of n, the output y[n] of this system when the input x[n] is
equal to sin((π/2) n)

d. Same question as in item [c.] when



x[n] = 1 for n = 0 and n = 1, and x[n] = 0 elsewhere

Use here the convolution expression (5.58).

e. What is the energy of the signal y[n] found in item [d.]? Hint. The energy of a signal
y[n] is defined as Σ_{n=−∞}^{+∞} y²[n].

Problem 5 (examination November 2005). Assume that the signal h[n] can be written
as the sum of the signals a[n] and b[n]:
h[n] = a[n] + b[n].
The signals a[n] and b[n] are given by:
a[n] = 3 δ[n − 1] with δ[n] the unit pulse (see page 15)
b[n] = −3 c^{n−1} u[n − 1] with u[n] the discrete-time unit step function (see page 14)
with |c| < 1 a known constant.

a. Determine the DTFT of h[n].

Assume that h[n] is the pulse response of a discrete-time system.

b. Determine the transfer function H(z) of this system.


c. If we denote the input and output signals of this system by x[n] and y[n], what is the
difference equation relating y[n] and x[n]?
d. Assume that the input to this system is x[n] = d^n u[n], with d a known constant (|d| < 1).
Determine the output y[n] of this system. Use for this purpose the hint below.

Hint (partial fraction decomposition): We can decompose the frequency function
(1 + γe^{−jΩ}) / ((1 + αe^{−jΩ})(1 + βe^{−jΩ})) (α, β, γ ∈ R) as follows:

(1 + γe^{−jΩ}) / ((1 + αe^{−jΩ})(1 + βe^{−jΩ})) = δ/(1 + αe^{−jΩ}) − ε/(1 + βe^{−jΩ})
with δ = (α − γ)/(α − β) and ε = (β − γ)/(α − β).
To show this relation, use the same procedure as in the hint of Exercise 6 of Session 3.

Problem 6. Consider the discrete-time system described by the following transfer function:
H(z) = 3z / (z² − z + 0.5)

a. Is this system stable? Motivate your answer.
b. If we denote the input and output signals of this system by x[n] and y[n], what is then
the difference equation relating y[n] and x[n]?

Problem 7 (examination February 2008). A continuous-time signal x1 (t) is given by

x1 (t) = xB (t)cos(10πt)
where the signal xB (t) is band-limited with a bandwidth equal to 30π rad/s i.e. the Fourier
transform XB(ω) of xB(t) is such that XB(ω) = 0 for |ω| > 30π rad/s.

a. Show that the signal x1 (t) is also band-limited and that X1 (ω) = 0 for all |ω| > 40π
rad/s.

b. Suppose that the signal x1 (t) is sampled and that this sampling delivers the discrete-
time signal x1 [n] = x1 (t = nTs ) with Ts the sampling period. What is the maximum
sampling period Ts to ensure that the DTFT X1 (Ω) of the sampled signal x1 [n] is, in its
main interval [−π π], a non-distorted image of the continuous-time Fourier transform
X1(ω) of x1(t)?

Now suppose that the continuous-time signal x1 (t) (which is the signal of interest) is
subject to an additive disturbance x2 (t) due to the electricity network and that we actually
measure the signal x(t):
x(t) = x1(t) + cos(100πt),   where the term cos(100πt) is the disturbance x2(t).

In order to be able to filter away the disturbance, the signal x(t) is measured via the mea-
surement device described in the figure below:

[Block diagram: x(t) → Sampling → x[n] → Filter F(Ω) → z[n]]

In this figure, we see that the signal x(t) is first sampled with a given sampling period Ts .
This delivers the sampled signal x[n] = x(nTs ). Subsequently, the sampled signal is filtered
by an ideal discrete-time filter F (Ω). The frequency response of this filter in its main
interval [−π π] is given by

F(Ω) = 1 for −Ω0 ≤ Ω ≤ Ω0, and F(Ω) = 0 elsewhere

for some Ω0 . As already mentioned, the objective of the measurement device is to ensure
that the DTFT Z(Ω) of the discrete-time signal z[n] is, in its main interval [−π π], a non-
distorted image of the continuous-time Fourier transform X1 (ω) of x1 (t).

c. Can we achieve this objective if Ts = 1/40 s? If yes, determine a possible value for Ω0 in
the expression of F(Ω) to achieve this objective.

d. Same question as in item [c.], but for Ts = 1/80 s.

e. Same question as in item [c.], but for Ts = 1/120 s.

Solutions.
Problem 2.

2.a. The signal x[n] is given as follows:



x[n] = x(nTs) = 2 sin(ω0 nTs) = 2 sin(10πn(0.04)) = 2 sin((2π/5) n)

2.b. The output y[n] can be directly deduced from expression (5.65) in the book. For this
purpose, we need to determine the modulus and the argument of H(Ω) evaluated at the
frequency of x[n], i.e. Ω0 = 2π/5 = 0.4π rad. From the Bode plot of H(Ω) in Figure 2, we
read that |H(0.4π)| ≈ 0.62 and ∠H(0.4π) ≈ −460 degrees. Thus ∠H(0.4π) ≈ −100 degrees,
which is also equivalent to −1.74 rad. Consequently, we obtain:

y[n] = 2 (0.62) sin((2π/5) n − 1.74) = 1.24 sin((2π/5) n − 1.74)

Problem 3.

3.a. Using the first difference equation, we obtain:

y[n = 0] = −ay[−1] + x[−1] = 0 + 0 = 0

y[n = 1] = −ay[0] + x[0] = 0 + 1 = 1

y[n = 2] = −ay[1] + x[1] = −a + 0 = −a

y[n = 3] = −ay[2] + x[2] = −a(−a) + 0 = a²

Using the second difference equation, we obtain:

y[n = 0] = bx[−1] + 2x[−2] = 0 + 0 = 0

y[n = 1] = bx[0] + 2x[−1] = b + 0 = b

y[n = 2] = bx[1] + 2x[0] = 0 + 2 = 2

y[n = 3] = bx[2] + 2x[1] = 0 + 0 = 0
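Such recursions are easy to check by direct simulation. The sketch below (an illustration; the parameter values a = 0.3 and b = 2 are arbitrary) reproduces the four samples computed above for both systems.

import numpy as np

a, b = 0.3, 2.0                                   # arbitrary illustrative values

def first_system(x, a):
    # y[n] = -a*y[n-1] + x[n-1], with y[n] = 0 and x[n] = 0 for n < 0
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = -a * (y[n - 1] if n >= 1 else 0.0) + (x[n - 1] if n >= 1 else 0.0)
    return y

def second_system(x, b):
    # y[n] = b*x[n-1] + 2*x[n-2]
    y = np.zeros(len(x))
    for n in range(len(x)):
        xm1 = x[n - 1] if n >= 1 else 0.0
        xm2 = x[n - 2] if n >= 2 else 0.0
        y[n] = b * xm1 + 2 * xm2
    return y

delta = np.array([1.0, 0.0, 0.0, 0.0])            # unit pulse input
print(first_system(delta, a))                     # [0, 1, -a, a^2] = [0, 1, -0.3, 0.09]
print(second_system(delta, b))                    # [0, b, 2, 0]    = [0, 2, 2, 0]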
3.b. Using the shift in time property of the DTFT on the first difference equation, we obtain:

Y(Ω) = −a e^{−jΩ} Y(Ω) + e^{−jΩ} X(Ω)   =⇒   H(Ω) = Y(Ω)/X(Ω) = e^{−jΩ}/(1 + ae^{−jΩ})

Doing the same for the second difference equation, we obtain:

Y(Ω) = be^{−jΩ} X(Ω) + 2e^{−j2Ω} X(Ω)   =⇒   H(Ω) = Y(Ω)/X(Ω) = be^{−jΩ} + 2e^{−j2Ω}

3.c. Using the relation between H(z) and H(Ω), we obtain that the transfer function of the
first system is H(z) = z^{−1}/(1 + az^{−1}). Noting that e^{−j2Ω} = (e^{−jΩ})², we obtain the transfer
function of the second system: H(z) = bz^{−1} + 2z^{−2}.

Equivalently, the transfer functions of the systems can directly be deduced from the dif-
ference equations. Indeed, using the shift in time property of the Z-transform on the first
difference equation, we obtain:

Y(z) = −a z^{−1} Y(z) + z^{−1} X(z)   =⇒   H(z) = Y(z)/X(z) = z^{−1}/(1 + az^{−1})

Doing the same for the second difference equation, we obtain:

Y(z) = bz^{−1} X(z) + 2z^{−2} X(z)   =⇒   H(z) = Y(z)/X(z) = bz^{−1} + 2z^{−2}
3.d. For the first system, H(z) can be rewritten as:

H(z) = z^{−1}/(1 + az^{−1}) = 1/(z + a)

The transfer function H(z) has thus one pole in z = −a. Consequently, the condition for
stability is that the modulus of a is strictly smaller than 1: |a| < 1 (the modulus of a real
number is equivalent to its absolute value). The transfer function H(z) of the second system
can be rewritten as:

H(z) = (bz + 2)/z²

This transfer function has two poles in z = 0 and is thus stable for all values of b.

3.e. The pulse response is by definition the inverse DTFT of the frequency response
H(Ω). The frequency response H(Ω) of the first system can be written as e−jΩ Z(Ω)
with Z(Ω) = 1/(1 + ae−jΩ ). Consequently, using the shift in time property, we have
that h[n] = z[n − 1] with z[n] the inverse DTFT of Z(Ω). Using Table 4.1, we see that
z[n] = (−a)n u[n] since |a| < 1. Consequently, the pulse response of the first system is
h[n] = (−a)(n−1) u[n − 1].

Consequently, we have for 0 ≤ n ≤ 3 that


h[0] = (−a)^{−1} u[−1] = (−1/a) · 0 = 0

h[1] = (−a)^0 u[0] = 1

h[2] = (−a)^1 u[1] = −a

h[3] = (−a)^2 u[2] = a²

This is precisely what we found in item [a.]. This is perfectly logical since the pulse response
h[n] is by definition the output response y[n] of a filter H(Ω) when x[n] = δ[n].

The pulse response of the second system can be deduced as follows. The frequency
response H(Ω) being the DTFT of h[n], we can write that:
H(Ω) = Σ_{n=−∞}^{+∞} h[n] e^{−jΩn}

Now, H(Ω) = be^{−jΩ} + 2e^{−j2Ω}. Consequently, we see by inspection that h[n] is given by:

h[n] = b for n = 1, h[n] = 2 for n = 2, and h[n] = 0 elsewhere

which is exactly what we found in item [a.].

Problem 4.

4.a. The discrete-time system is causal since h[n] = 0 ∀n < 0 and it is stable since h[n] = 0
for all n larger than 4.

4.b. The frequency response of the considered system is by definition the DTFT of the pulse
response h[n] of the system:
H(Ω) = 2 Σ_{n=1}^{4} e^{−jΩn} = 2 (e^{−jΩ} − e^{−j5Ω}) / (1 − e^{−jΩ})

where we made use of formula (4.5) in the book.

4.c. First notice that:



H(Ω = π/2) = 2 (e^{−jπ/2} − e^{−j5π/2}) / (1 − e^{−jπ/2}) = 2 (−j + j)/(1 + j) = 0
Using formula (5.65) in the book, we then conclude that y[n] = 0.

4.d. The output y[n] is given by (see formula (5.58) in the book)

y[n] = Σ_{i=−∞}^{+∞} h[i] x[n − i] = Σ_{i=1}^{4} h[i] x[n − i]

We know that y[n] = 0 for n < 0 since the system is causal. Let us apply the formula for
n ≥ 0:

y[n = 0] = Σ_{i=1}^{4} h[i] x[−i] = h[1]x[−1] + h[2]x[−2] + h[3]x[−3] + h[4]x[−4] = 0 + 0 + 0 + 0 = 0

y[n = 1] = Σ_{i=1}^{4} h[i] x[1 − i] = h[1]x[0] + h[2]x[−1] + h[3]x[−2] + h[4]x[−3] = 2·1 + 0 + 0 + 0 = 2

y[n = 2] = Σ_{i=1}^{4} h[i] x[2 − i] = h[1]x[1] + h[2]x[0] + h[3]x[−1] + h[4]x[−2] = 2·1 + 2·1 + 0 + 0 = 4

y[n = 3] = Σ_{i=1}^{4} h[i] x[3 − i] = h[1]x[2] + h[2]x[1] + h[3]x[0] + h[4]x[−1] = 0 + 2·1 + 2·1 + 0 = 4

y[n = 4] = Σ_{i=1}^{4} h[i] x[4 − i] = h[1]x[3] + h[2]x[2] + h[3]x[1] + h[4]x[0] = 0 + 0 + 2·1 + 2·1 = 4

y[n = 5] = Σ_{i=1}^{4} h[i] x[5 − i] = h[1]x[4] + h[2]x[3] + h[3]x[2] + h[4]x[1] = 0 + 0 + 0 + 2·1 = 2

y[n = 6] = Σ_{i=1}^{4} h[i] x[6 − i] = h[1]x[5] + h[2]x[4] + h[3]x[3] + h[4]x[2] = 0 + 0 + 0 + 0 = 0

and y[n] remains equal to 0 for all larger values of n. To sum up:

y[n] = 2 for n = 1 and n = 5, y[n] = 4 for 2 ≤ n ≤ 4, and y[n] = 0 elsewhere

4.e. The energy of y[n] can be computed as:

Σ_{n=−∞}^{+∞} y²[n] = 0 + (2)² + (4)² + (4)² + (4)² + (2)² + 0 = 56
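As a quick check (an illustration, not part of the original solution), the convolution and the energy can be reproduced with numpy, where the index of each array entry plays the role of n.

import numpy as np

h = np.array([0.0, 2.0, 2.0, 2.0, 2.0])   # h[n] = 2 for 1 <= n <= 4
x = np.array([1.0, 1.0])                  # x[n] = 1 for n = 0 and n = 1

y = np.convolve(h, x)                     # convolution sum (5.58)
print(y)                                  # [0. 2. 4. 4. 4. 2.] for n = 0..5, and 0 afterwards
print(np.sum(y ** 2))                     # 56.0, the energy found in item [e.]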

Problem 5.

5.a. Using the linearity property of the DTFT we can write that H(Ω) = A(Ω) + B(Ω) with
A(Ω) and B(Ω) the DTFTs of a[n] and b[n] respectively. Using Table 4.1, we have that:

A(Ω) = 3 e^{−jΩ}

If we define v[n] = c^n u[n], then we observe that b[n] = −3v[n − 1]. Consequently, B(Ω) =
−3e^{−jΩ} V(Ω) with V(Ω) the DTFT of v[n]. Since v[n] = c^n u[n], V(Ω) is thus 1/(1 − ce^{−jΩ}).
Consequently:

B(Ω) = −3 e^{−jΩ} / (1 − c e^{−jΩ})
We have thus:

H(Ω) = 3e^{−jΩ} − 3e^{−jΩ}/(1 − ce^{−jΩ})
     = (3e^{−jΩ}(1 − ce^{−jΩ}) − 3e^{−jΩ}) / (1 − ce^{−jΩ})
     = −3 c e^{−2jΩ} / (1 − c e^{−jΩ})
5.b. The transfer function H(z) can be constructed by replacing each term ejΩ in H(Ω)
by z:
H(z) = −3 c z^{−2} / (1 − c z^{−1})
5.c. We can deduce the difference equation either via the frequency response H(Ω) or via the
transfer function H(z). Here, we do it using the frequency response. We know that Y(Ω) =
H(Ω)X(Ω) with Y (Ω) and X(Ω) the DTFT of y[n] and x[n]. Consequently, we have:
(1 − c e−jΩ ) Y (Ω) = −3 c e−2jΩ X(Ω)

=⇒

Y (Ω) − c e−jΩ Y (Ω) = −3 c e−2jΩ X(Ω)


Applying the inverse DTFT on the last equation, we obtain the difference equation of the
system:
y[n] − c y[n − 1] = −3 c x[n − 2].

5.d. We can answer this question either using the frequency response H(Ω) or using the
transfer function H(z). Here, we do it using the frequency response. We know that
Y(Ω) = H(Ω)X(Ω) with Y(Ω) and X(Ω) the DTFT of y[n] and x[n]. Consequently,

Y(Ω) = ( −3 c e^{−2jΩ} / (1 − c e^{−jΩ}) ) · ( 1 / (1 − de^{−jΩ}) )          [the last factor is X(Ω)]

     = −3 c e^{−2jΩ} · 1 / ((1 − c e^{−jΩ})(1 − de^{−jΩ}))

     = −3 c e^{−2jΩ} · [ (c/(c−d)) / (1 − c e^{−jΩ}) − (d/(c−d)) / (1 − d e^{−jΩ}) ]   (the bracketed factor is denoted V(Ω))

where for the last step partial fraction decomposition was used. If we denote v[n] the inverse
DTFT of V (Ω) as defined above, the output signal y[n] is equal to −3 c v[n − 2] (see Table
4.2). Now, using Table 4.1., we have that:

v[n] = (c/(c−d)) c^n u[n] − (d/(c−d)) d^n u[n] = (1/(c−d)) (c^{n+1} − d^{n+1}) u[n]

=⇒ y[n] = −3 c v[n − 2] = (−3c/(c−d)) (c^{n−1} − d^{n−1}) u[n − 2]

Problem 6.

6.a. The transfer function H(z) can be rewritten as:

H(z) = 3z / (z² − z + 0.5) = 3z / ((z − (1+j)/2)(z − (1−j)/2))

H(z) has thus two poles: one in z = (1+j)/2 and one in z = (1−j)/2. Both poles have a modulus
equal to √2/2, which is smaller than one. Consequently, H(z) is stable.

6.b. The transfer function can be rewritten as:


H(z) = 3z / (z² − z + 0.5) = 3z^{−1} / (1 − z^{−1} + 0.5z^{−2})

We know that Y (z) = H(z)X(z) with Y (z) and X(z) the Z-transforms of y[n] and x[n].
This leads to

(1 − z −1 + 0.5z −2 ) Y (z) = 3z −1 X(z)

Y (z) − z −1 Y (z) + 0.5z −2 Y (z) = 3z −1 X(z)

Using the shift in time property of the Z-transform, the inverse Z-transform of this equation
is:

y[n] − y[n − 1] + 0.5y[n − 2] = 3x[n − 1]

which is the difference equation of the system.

Problem 7.

7.a. Using the property of multiplication by a cosine for the continuous-time Fourier Trans-
form, we have that X1(ω) = (1/2)(XB(ω − 10π) + XB(ω + 10π)). Consequently, we see that for
all |ω| > 40π rad/s, X1(ω) = 0 since XB(ω) = 0 for all |ω| > 30π rad/s.

7.b. Since the signal x1(t) has a bandwidth equal to 40π rad/s, Shannon's theorem tells
us that the minimal sampling frequency is equal to 80π rad/s and thus that the maximal
sampling period is Ts = 1/40 s.

7.c. The signal x[n] is given by x1[n] + x2[n] and thus the DTFT X(Ω) of x[n] is equal to
X1(Ω) + X2(Ω). Now, suppose that Ts ≤ 1/40 s. Under this assumption (see item [b.]), the
DTFT X1(Ω) will be a non-distorted version of X1(ω). Moreover, X1(Ω) will be equal to
zero in its main interval for Ω in the range 40πTs < |Ω| ≤ π since x1(t) has a bandwidth
of 40π rad/s. Now, for Ts = 1/40 s, this shows that X1(Ω) is non-zero on the whole main
interval. Let us now take a look at X2(Ω). For this purpose, we first see that

x2[n] = cos(100πnTs) = cos((5π/2) n) = cos((π/2) n)

Figure 3: X(Ω) in its main interval for sub-question 7.c.

Consequently, X2(Ω) presents delta-impulses at Ω = ±π/2. Since X1(Ω) is non-zero on the
whole main interval and thus also for Ω = ±π/2, X(Ω) can be represented as shown in
Figure 3 where we see that X2(Ω) "distorts" X1(Ω). Consequently, it will be impossible to
retrieve a non-distorted image of X1(Ω) after filtering of X(Ω). The objective can thus not
be attained with Ts = 1/40 s.

7.d. When Ts = 1/80 s, the DTFT X1(Ω) will be equal to zero for all Ω such that π/2 < |Ω| ≤ π.
The disturbance becomes:

x2[n] = cos((5π/4) n) = cos(−(3π/4) n) = cos((3π/4) n)

Consequently, X2(Ω) presents delta-impulses at Ω = ±3π/4. There is aliasing, but the mir-
rored disturbance does not fall into the bandwidth of X1(Ω). Indeed, X1(Ω = ±3π/4) = 0.
Consequently, X(Ω) can be represented as shown in Figure 4 and we see that it will be
possible to retrieve a non-distorted image of X1(Ω) after filtering of X(Ω) by choosing Ω0 in
the interval:

π/2 ≤ Ω0 < 3π/4

Figure 4: X(Ω) in its main interval for sub-question 7.d.
7.e. When Ts = 1/120 s, the DTFT X1(Ω) will be equal to zero for all Ω such that π/3 < |Ω| ≤ π.
The disturbance becomes:

x2[n] = cos((5π/6) n)

Consequently, X2(Ω) presents delta-impulses at Ω = ±5π/6. Note that there is here not any
aliasing. Since X1(Ω = ±5π/6) = 0, we can follow the same reasoning as in item [d.] and it
will be possible to retrieve a non-distorted image of X1(Ω) after filtering by choosing Ω0 in
the interval:

π/3 ≤ Ω0 < 5π/6

Conversion table.

THIRD edition SECOND edition


p. 14 p. 19
p. 15 p. 20
p. 250 p. 332
p. 538 p. 614
Table 4.1 Table 7.1
Table 4.2 Table 7.2
Chapter 7 Chapter 11
(4.2) (7.2)
(4.5) (7.5)
(4.27) (7.27)
(5.58) (7.48)
(5.60) (7.50)
(5.65) (7.55)

Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 5:
SPECTRAL ANALYSIS and FILTERING WITH DISCRETE-TIME SYSTEMS (C)

Digital-Analog conversion via Zero Order Hold

Digital-Analog (D/A) conversion consists of transforming a discrete-time signal x[n]


(sampling period Ts ) into a continuous-time signal x(t) with a similar frequency content.
Such a conversion is of course necessary when the control input of a real-life system has been
computed by a digital controller. In the Exercise 8, we will give another example where such
conversion is crucial.

In session 4, we have described the ideal Digital-Analog conversion. It was done via the
following relation:
x(t) = (1/π) Σ_{n=−∞}^{+∞} x[n] sin((ωs/2)(t − nTs)) / (t − nTs)          (1)
This conversion is ideal in the sense that if x[n] was a sampled version of x(t) i.e. x[n] =
x(t = nTs ), this relation allows to perfectly reconstruct x(t) (provided Shannon’s theorem
was respected during the sampling). The conversion described by (1) corresponds to filtering
the “fictive” signal corresponding to Xs (ω) by the following ideal filter:

Hideal(ω) = Ts for −ωs/2 ≤ ω ≤ ωs/2, and Hideal(ω) = 0 elsewhere

where ωs = 2π/Ts. Recall that Xs(ω) is the image of the DTFT X(Ω) of x[n] in the actual
frequency range:

Xs (ω) = X(Ω = ωTs )

and that Xs (ω) is thus periodic of period ωs since X(Ω) is periodic of period 2π.

In Session 4, we have also observed that the above conversion technique is not usable in
practice. Instead, simpler methods such as the Zero Order Hold (ZOH) mechanism are used
to generate continuous-time signals from discrete-time signals. The continuous-time signal
xZOH (t) generated with the ZOH is equal to x[n] for nTs ≤ t < (n + 1)Ts . This conversion
method is illustrated below for an example.

Example. Consider the signal x(t) = cos(t) + cos(2t) represented in blue solid in the right
part of Table 1. The signal x(t) is sampled with Ts = 0.5 s. The sampled signal x[n] is
represented in the left part of Table 1. The signal xZOH (t) generated with the Zero-Order
Hold is represented in red dashed in the right part of Table 1. Unlike the ideal conversion,
we do not obtain xZOH (t) = x(t). However, for smaller values of Ts , we generally get a pretty
nice image of x(t).

Like the ideal conversion, the ZOH conversion can also be understood as a filtering oper-
ation on the “fictive” signal corresponding to Xs (ω). More precisely, the Fourier transform
XZOH (ω) of the continuous-time signal xZOH (t) is given by:
XZOH(ω) = HZOH(ω) Xs(ω)   with   HZOH(ω) = (1 − e^{−jωTs}) / (jω)

We observe that HZOH (ω) of the ZOH mechanism is a function of the sampling period Ts .
For the case Ts = 0.5 such as considered in the example above, the modulus of HZOH (ω) is
represented in the left part of Table 2 where it is compared to the filter corresponding to the
ideal conversion. By inspecting this figure, we observe that HZOH (ω) is also a low-pass filter
but, unlike Hideal (ω), HZOH (ω) does not completely remove the high-frequency components1
of Xs (ω). Consequently, in the example of Table 1, xZOH (t) is not just cos(t) + cos(2t), but
also contains higher frequencies, as can be evidenced by its “block”-shape. Note however
that these high-frequency components in xZOH (t) are smaller for smaller values of Ts .
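The ZOH construction and its low-pass behaviour are easy to reproduce numerically. The sketch below (an illustration, assuming the ZOH frequency response (1 − e^{−jωTs})/(jω) used above) builds xZOH(t) for the example signal and evaluates the low-frequency gain of HZOH(ω).

import numpy as np

Ts = 0.5
t = np.linspace(0, 10, 2001)
x = np.cos(t) + np.cos(2 * t)                       # the example signal x(t)
n = np.arange(int(10 / Ts) + 1)
x_samples = np.cos(n * Ts) + np.cos(2 * n * Ts)     # x[n] = x(n*Ts)

# Zero-order hold: x_zoh(t) = x[n] for n*Ts <= t < (n+1)*Ts
idx = np.clip((t // Ts).astype(int), 0, len(n) - 1)
x_zoh = x_samples[idx]
print(np.max(np.abs(x - x_zoh)))    # the "staircase" error, which shrinks for smaller Ts

# Frequency response of the ZOH filter (assumed form)
w = np.linspace(0.01, 40, 1000)
H_zoh = (1 - np.exp(-1j * w * Ts)) / (1j * w)
print(np.abs(H_zoh[0]))             # ~Ts at low frequencies, like the ideal filter H_ideal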

Table 1: Left: sampled signal x[n] (Ts = 0.5 s); Right: Signal xZOH(t) (red dashed) compared
to x(t) (blue solid).

Exercise 8. D/A conversion for compact disks. In exercise 10 of session 4, we analyzed


how a piece of music could be stored on a CD. This is done as follows. Denote by z(t)
the music signal. This signal2 is band-limited with a bandwidth equal to 120 krad/s ( 1
krad/s = 1 kilo rad/s = 1000 rad/s). Consequently, we have that:

Z(ω) = 0 for all |ω| > 120 krad/s          (2)


In this exercise, we will suppose for simplicity that Z(ω) is entirely real and thus that
|Z(ω)| = Z(ω). Furthermore, we will suppose that Z(ω) has the shape given in the right
part of Table 2.

1. These high-frequency components are due to the periodicity of Xs(ω); see session 4.
2. As shown in session 4, the signal z(t) is in fact the music signal from which all the frequency
components that cannot be heard have been filtered.
Table 2: Left: Hideal(ω) (blue dashed) and |HZOH(ω)| (red solid) for the case Ts = 0.5 (i.e.
ωs = 4π). Right: Fourier Transform Z(ω) of the continuous-time music signal z(t) in the
interval [0 1500] krad/s

In order to be stored on the CD, the signal z(t) is sampled at a sampling

frequency ωs = 260 krad/s (Ts = 2π/260 ms) and the obtained samples z[n] are stored on the
CD. The frequency information in z[n] can be expressed as a function of the normalized
frequencies Ω (the DTFT Z(Ω)) or as a function of the actual frequencies ω. The latter
representation is denoted Zs(ω) = Z(Ω = ωTs). As shown in session 4, Zs(ω) is equal to
(1/Ts) Σ_{k=−∞}^{+∞} Z(ω − kωs). Consequently, Ts Zs(ω) is also entirely real and can be represented
as in the left part of Table 3. The quantity Ts Zs(ω) will be denoted Z̄s(ω) in the sequel.

In this exercise, we will analyze how we can listen to the CD. The discrete-time signal z[n]
must for this purpose somehow be transformed into a continuous-time (electrical) signal
x(t) that can subsequently be amplified by the loudspeaker. The signal x(t) in question must
of course be (almost) equal to the initial music signal z(t), i.e. the one whose sampled version
is stored on the CD. Consequently, it is required that |X(ω)| ≈ |Z(ω)| for all frequencies3.
We could think that it would be sufficient to require that |X(ω)| ≈ |Z(ω)| for all audible
frequencies, i.e. for |ω| < 120 krad/s. However, even though the frequency components above
120 krad/s are not audible, it is nevertheless important that

X(ω) ≈ Z(ω) = 0 for all |ω| > 120 krad/s


or more precisely:

|X(ω)| < 0.005 for all |ω| > 120 krad/s


This extra constraint is due to the fact that the loudspeaker amplifier is a nonlinear operator
which can therefore map high frequency components into lower frequencies.

3. We here restrict attention to the modulus mainly for the sake of simplicity, but also because the phase
deformation due to the reconstruction operation can be neglected.

Table 3: Left: Z̄s(ω) = Ts Zs(ω) corresponding to the discrete-time signal z[n] engraved
on the CD in the interval [0 1500] krad/s (blue solid) and |H̄ZOH(ω)| (red dashed); Right:
|Y(ω)| (blue solid, with |Y(ω = 200 krad/s)| = 0.07 indicated) and modulus |F(ω)| of the second
order Butterworth filter F(ω) with cut-off frequency equal to ωs/2 = 130 krad/s

To sum up, x(t) is a good signal to send to the loudspeaker if


1. |X(ω)| ≈ |Z(ω)| for |ω| < 120 krad/s
2. |X(ω)| < 0.005 for all |ω| > 120 krad/s.
Note that the ideal D/A converter would do the job. However, as said above, such an ideal
conversion is impossible in practice and we will need to use the ZOH mechanism instead.
Let us denote by y(t) the continuous-time signal generated by the ZOH. The modulus
|HZOH(ω)| of the filter corresponding to the ZOH mechanism (Ts = 2π/260 ms) is represented
in red dashed in the left part of Table 3. In fact, in this table, we represent4 the modulus of
H̄ZOH(ω) = (1/Ts) HZOH(ω). The Fourier transform Y(ω) of y(t) is given by:

Y (ω) = HZOH (ω)Zs (ω) = H̄ZOH (ω)Z̄s (ω)


and its modulus |Y (ω)| is represented in blue solid in the right part of Table 3.
a. Explain why y(t) cannot be directly sent to the loudspeaker?
Since y(t) itself cannot be sent to the loudspeaker, we could try to filter y(t) using a
continuous-time low-pass filter F (ω) as shown in the left part of Table 4. As shown in
session 3, we could use a Butterworth filter. The order of this filter can not be chosen too
high due to the problems we discussed in Session 3 (see solution 9.d in session 3), but also
because a high order filter is very difficult to implement in practice. The order is here chosen
equal to N = 2 and the cut-off frequency equal to 130 krad/s. The modulus |F (ω)| of the
frequency response of this filter is represented in red dashed in the right part of Table 3.
4
We represent scaled versions of both HZOH (ω) and Zs (ω) to be able to represent them in the same
figure.

b. Denote by x(t) the output of F(ω). Explain why x(t) can also not be sent to the
loudspeaker.

[Block diagram: z[n] → ZOH (Ts) → y(t) → continuous-time filter F(ω) → x(t)]

Table 4: Left: Reconstruction set-up for item [b.]. Right: |H̄ZOH,2(ω)|

In fact, the simple procedure presented above to generate x(t) would work perfectly if ωs
had been chosen larger. Let us prove it. Suppose thus that the music signal z(t) is
sampled at a sampling frequency ωs,2 = 1040 krad/s (Ts,2 = 2π/1040 ms). The obtained discrete-
time signal z2[n] = z(t = nTs,2) is then the signal which is converted into a continuous-time
signal y2(t) by using a ZOH mechanism (Ts,2 = 2π/1040 ms). In the right part of Table 4, we
represent the modulus of H̄ZOH,2(ω) = (1/Ts,2) HZOH,2(ω) with HZOH,2(ω) the filter correspond-
ing to this ZOH mechanism (Ts,2 = 2π/1040 ms). The signal x2(t) is then generated by filtering
the signal y2(t) with the same Butterworth filter F(ω) as previously, i.e. the one represented
in the right part of Table 3 (red dashed).

c. Show that this signal x2 (t) can be sent to the loudspeaker.

Increasing the sampling frequency has an important drawback: it increases the number
of samples that must be stored on the CD and reduces therefore the duration of the music
pieces that can be stored on the CD. We will therefore not change the sampling frequency of
z(t). It remains equal to ωs = 260 krad/s. However, we will find a way to generate the signal
z2 [n] (that would be obtained from z(t) with ωs,2 = 1040 krad/s) from the signal z[n] stored
on the CD (and that has been obtained from z(t) with ωs ). Such an operation will be realized
by a technique called oversampling followed by a discrete-time filtering operation as shown in
Table 5 (digital part). Let us first present the oversampling technique. Oversampling
transforms z[n] (sampling frequency ωs) into a signal v2[n] (sampling frequency ωs,2 = 4ωs)5
by inserting three samples equal to zero between each sample of z[n]. This delivers the
discrete-time signal v2[n]:

v2[n] = z[n/4] if n is a multiple of 4, and v2[n] = 0 elsewhere

5. The subscript "2" in the notation v2[n] is to stress that v2[n] is a discrete-time signal corresponding
to a sampling frequency ωs,2.

The oversampling technique is illustrated in Table 6 for a given signal z[n].

[Block diagram: z[n] → oversampling → v2[n] → G(Ω) → d2[n] = z2[n] → ZOH (Ts,2) → y2(t) → F(ω) → x2(t);
the oversampling and G(Ω) blocks form the digital part, the ZOH and F(ω) blocks the analog part]

Table 5: Reconstruction mechanism

Table 6: Left: signal z[n]; Right: oversampled signal v2[n] corresponding to z[n]
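The zero-insertion step itself is a one-liner; the sketch below (an illustration with an arbitrary short z[n]) shows the operation.

import numpy as np

z = np.array([1.0, 3.0, 2.0, 4.0, 5.0])   # an illustrative z[n]
v2 = np.zeros(4 * len(z))                 # oversampling by a factor 4:
v2[::4] = z                               # three zeros inserted between consecutive samples
print(v2)   # [1. 0. 0. 0. 3. 0. 0. 0. 2. 0. 0. 0. 4. 0. 0. 0. 5. 0. 0. 0.]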

d. Determine the DTFT V2 (Ω) of v2 [n] as a function of the DTFT Z(Ω) of the signal z[n]
stored on the CD (z[n] = z(t = nTs )) and compare these two DTFT’s in a plot.

Based on the oversampled signal v2[n], we can generate the desired signal z2[n] by filtering
v2[n] with a discrete-time low-pass filter with cut-off frequency Ωcut = π/4 and with an ampli-
fication gain of Ts/Ts,2 = 4. The filter G(Ω) for this filtering operation has thus the following
frequency response in its main interval [−π π]:

G(Ω) = 4 for −π/4 ≤ Ω ≤ π/4, and G(Ω) = 0 elsewhere in the main interval

We choose an ideal filter for G(Ω). It is here not a large oversimplification since, unlike for
continuous-time filters, it is very easy to design a discrete-time filter which is (almost) ideal6.

e. Denote by d2[n] the signal obtained by filtering v2[n] by the filter G(Ω). Show that
the signal d2 [n] is equal to the signal z2 [n] that would be obtained by sampling the
music signal z(t) at the sampling frequency ωs,2 (z2 [n] = z(t = nTs,2 )).

Now, since we could generate d2 [n] = z2 [n] from the stored signal z[n] via oversampling+filtering,
it is straightforward to generate the signal x2 (t) that can be sent to the loudspeaker i.e. we
have just to follow the procedure described above item [c.]. The full reconstruction mecha-
nism for the CD technology is thus as represented in Table 5.

6
The main approximation here is that an actual discrete-time filter will introduce a small delay. This
small delay is necessary to make the filter causal (see Chapter 10 of the book for more details).

Solutions
Exercise 8.

8.a. In order to be a good signal to go to the loudspeaker, the signal y(t) should fulfill the
two following conditions:

1. |Y (ω)| ≈ |Z(ω)| for |ω| < 120 krad/s

2. |Y (ω)| < 0.005 for all |ω| > 120 krad/s.

For the first condition, recall that Z̄s(ω) = Z(ω) for |ω| < 120 krad/s. Since Y(ω) =
H̄ZOH(ω) Z̄s(ω), the first condition is met if H̄ZOH(ω) ≈ 1 for |ω| < 120 krad/s. Since
|H̄ZOH(ω = 120 krad/s)| = 0.8, we can hardly say that the first condition is met, but,
as we will now see, the biggest problem is the second condition. The second condition is
indeed definitely NOT met since the high frequency components are not sufficiently reduced
by H̄ZOH(ω). Indeed, we do not have that |Y(ω)| < 0.005 for all |ω| > 120 krad/s. For
example, at ω = 200 krad/s, |Y(ω)| = 0.07.

8.b. The second condition (i.e. |X(ω)| < 0.005 for all |ω| > 120 krad/s) is not met. Indeed,
for example, since |Y(ω = 200 krad/s)| = 0.07, the gain of the filter F(ω) at ω = 200
krad/s should be smaller than 0.005/0.07 = 0.071. This is not the case since |F(ω)| at ω = 200
krad/s is ≈ 0.4.

8.c. For the case of a sampling frequency ωs,2 = 1040 krad/s, the DTFT Z2(Ω) of z2[n]
expressed as a function of the normalized frequency Ω can also be expressed as a function
of the actual frequency ω = Ω/Ts,2. This yields Zs,2(ω) = Z2(Ω = ωTs,2) which is periodic with
period ωs,2. For simplicity, we introduce the notation Z̄s,2(ω) = Ts,2 Zs,2(ω). The function
Z̄s,2(ω) is equal to Σ_{k=−∞}^{+∞} Z(ω − kωs,2) with Z(ω) the continuous-time Fourier transform of
z(t). Consequently, Z̄s,2 (ω) is entirely real (Z(ω) being entirely real) and can be represented
as in the left part of Table 7. In this left part, Z̄s,2 (ω) is compared with |H̄ZOH,2(ω)| which was
also given in the right part of Table 4. Since Y2 (ω) = HZOH,2(ω)Zs,2(ω) = H̄ZOH,2(ω)Z̄s,2(ω),
the modulus of the Fourier transform Y2 (ω) of the signal y2 (t) has the shape given in blue
solid in the right part of Table 7. The signal y2 (t) is then filtered by F (ω); this delivers x2 (t)
whose Fourier transform X2 (ω) is thus equal to F (ω)Y2(ω). Consequently, we have that:

X2(ω) = F(ω) H̄ZOH,2(ω) Z̄s,2(ω) = F(ω) Y2(ω)

For the sake of completeness, the modulus of F(ω) is also reproduced in the right part of
Table 7.

To be a good signal for the loudspeaker, the modulus of X2 (ω) should be such that:

1. |X2 (ω)| ≈ |Z(ω)| for |ω| < 120 krad/s

2. |X2 (ω)| < 0.005 for all |ω| > 120 krad/s.

The first condition is met since both |H̄ZOH,2(ω)| and |F (ω)| are approximately equal to one
for |ω| < 120 krad/s (see Table 7). The second condition is also met. Indeed, we observe
that
• the highest value of |Y2 (ω)| for |ω| > 120 krad/s is obtained at ω ≈ 950 krad/s and is
equal to ≈ 0.015
• |Y2 (ω)| = 0 between 120 krad/s and 1040 − 120 = 920 krad/s

• |Y2 (ω)| ≤ 0.015 for all |ω| > 920 krad/s.


Consequently, the second condition will be met for x2(t) if |F(ω)| is smaller than 0.005/0.015 = 1/3
for all |ω| > 920 krad/s. As can be seen in red dashed in the right part of Table 7, this is
indeed the case.

Table 7: Left: Z̄s,2(ω) = Ts,2 Zs,2(ω) in the frequency range [0 2500] krad/s (blue solid) and
|H̄ZOH,2(ω)| (red dashed); Right: |Y2(ω)| (blue solid, with |Y2(ω = 950 krad/s)| = 0.015 indicated)
and |F(ω)| (red dashed)

8.d. The DTFT V2 (Ω) of v2 [n] is defined as:



V2(Ω) = Σ_{n=−∞}^{+∞} v2[n] e^{−jΩn}

Since v2[n] is only nonzero when n is a multiple of 4, the DTFT V2(Ω) can be rewritten as:

V2(Ω) = ... + v2[−8] e^{j8Ω} + v2[−4] e^{j4Ω} + v2[0] + v2[4] e^{−j4Ω} + v2[8] e^{−j8Ω} + ...
      = Σ_{m=−∞}^{+∞} v2[4m] e^{−j4mΩ}   with v2[4m] = z[m]

Recalling that Z(Ω) = Σ_{m=−∞}^{+∞} z[m] e^{−jΩm}, we conclude that:

V2(Ω) = Z(4Ω)
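This identity can be checked numerically for any finite-length stand-in for z[n] (the signal below is an arbitrary random sequence, not the actual music signal).

import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(32)                  # arbitrary finite-length stand-in for z[n]
v2 = np.zeros(4 * len(z))
v2[::4] = z                                  # oversampling: three zeros between samples

def dtft(sig, Om):
    n = np.arange(len(sig))
    return np.exp(-1j * np.outer(Om, n)) @ sig

Omega = np.linspace(0, np.pi, 500)
print(np.max(np.abs(dtft(v2, Omega) - dtft(z, 4 * Omega))))   # ~0: V2(Omega) = Z(4*Omega)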

Based on this formula, we see that the shape of V2(Ω) can be deduced from the shape of
Z(Ω). Consequently, let us first represent the DTFT Z(Ω) of z[n] = z(t = nTs). We know
that Z(Ω) = Zs(ω = Ω/Ts) and Zs(ω) is represented in Table 3. Consequently, we know also
that:

• Like Zs(ω), Z(Ω) is entirely real

• More importantly, since Zs(ω) is, in its main interval [−130 130] krad/s, nonzero
for |ω| < 120 krad/s, the DTFT Z(Ω) is, in its main interval [−π π], nonzero for
|Ω| < 120 Ts = 24π/26 = 2.9.

• Z(Ω) is periodic of period 2π while Zs(ω) is periodic with period ωs = 260 krad/s

• Like for Zs(ω), there is a scaling factor 1/Ts between the DTFT Z(Ω) of the sampled
signal z[n] and the Fourier Transform Z(ω) of the continuous-time signal z(t). In other
words, Z(Ω = 0) = 0.45/Ts since Z(ω = 0) = 0.45.

We can thus represent Z(Ω) for the normalized frequency interval [0 3π] as in the left part
of Table 8 (blue dashed). From the representation of Z(Ω), we can then easily deduce the
shape of V2 (Ω) via the formula V2 (Ω) = Z(4Ω). This leads to the plot in red solid in the left
part of Table 8.

It is very important to note that, in the case of V2(Ω), the normalized frequency Ω cor-
responds to an actual frequency ω = Ω/Ts,2 since v2[n] is a discrete-time signal corresponding
to a sampling period Ts,2!

8.e. The filter G(Ω) is represented in the right part of Table 8 (blue dashed) and is com-
pared to V2 (Ω). Consequently, the signal d2 [n] has a DTFT D2 (Ω) equal to G(Ω)V2 (Ω). The
DTFT D2 (Ω) of d2 [n] is thus as represented7 in Table 9. We will now show that this DTFT
D2 (Ω) is equal to the DTFT Z2 (Ω) of z2 [n] for all Ω. Having proved this, we will have also
proven that z2 [n] = d2 [n].

To show that D2 (Ω) = Z2 (Ω) for all Ω, we consider first Table 7 where Zs,2(ω) = Z2 (Ω =
ωTs,2) is represented (after multiplication by Ts,2). If we replace the actual frequency ω in
this graph by the normalized frequency Ω = ωTs,2, we obtain exactly the same figure as
the one of D2 (Ω). For example, in Table 7, we see that we have a shifted version of Z(ω)
centered at ωs,2 = 1040 krad/s. This frequency corresponds to the normalized frequency
Ω = 1040 Ts,2 = 2π and we effectively see that D2 (Ω) presents the same pattern around
Ω = 2π = 6.28.

7. In fact, in this table, we represent Ts,2 D2(Ω).

Table 8: Left: Z(Ω) (blue dashed) and V2(Ω) (red solid) in the interval [0 3π]; Right: G(Ω)
(blue dashed) and V2(Ω) (red solid) in the interval [0 3π]

Remark. Via the oversampling and the subsequent filtering, we have thus been able to
retrieve the values of z(t) at t = nTs,2 = nTs/4 from the values of z(t) at t = nTs, i.e. we
have been able to retrieve values of z(t) in between the available samples of z[n]. This seems
magical, but it is not. Indeed, you have to recall that we have respected Shannon's theorem
when sampling the signal z(t). Consequently, z[n] contains all the information present in
z(t).

Table 9: Ts,2 D2(Ω) in the interval [0 3π]
Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 6: STOCHASTIC PROCESSES

The values y[n] taken by a stochastic process y at each time-instant n vary at each experi-
ment/realization. However, each realization of y presents the same global characteristics i.e.
each realization will for example oscillate around the same value and will have the same fre-
quency content.... In this session, we will restrict attention to a particular type of stochastic
processes: the (wide-sense) stationary stochastic processes. As the term stationary indicates,
a wide-sense stationary (WSS) stochastic process is a stochastic process for which the mean
Ey[n] and the quantity E(y[n]y[n − τ ]) (for arbitrary τ ) are independent of the time instant
n where they are evaluated.

Given two (jointly) stationary stochastic processes y[n] and x[n], we have the following
definitions:

• The variance of y[n] (constant over n) is defined as E(y[n] − Ey[n])2

• cross-correlation function: Ryx [τ ] = E (y[n]x[n − τ ])

• auto-correlation function: Ry [τ ] = E (y[n]y[n − τ ])

• The power of a stochastic process is defined as Ey2 [n] = Ry [0]

• The power spectral density function Φy (Ω) is defined as the DTFT of Ry [τ ] i.e.

Φy(Ω) = Σ_{τ=−∞}^{+∞} Ry[τ] e^{−jΩτ}

When y is obtained by filtering the WSS stochastic process x with a filter whose transfer
function is given by H(z), we have that:

Φy (Ω) = |H(z = ejΩ )|2 Φx (Ω) expression (3.26) in the lecture notes
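A small simulation can illustrate this filtering relation and, at the same time, Parseval's theorem for stochastic processes (used in Exercise 2 below). The sketch is only an illustration; it uses the first-order recursion of Exercise 3, y[n] = 0.9 y[n−1] + e[n], so that H(z) = 1/(1 − 0.9 z^{−1}).

import numpy as np

rng = np.random.default_rng(0)
sigma_e2 = 1.0

# Filter a long white-noise realization through y[n] = 0.9*y[n-1] + e[n]
e = np.sqrt(sigma_e2) * rng.standard_normal(200_000)
y = np.zeros_like(e)
for n in range(1, len(e)):
    y[n] = 0.9 * y[n - 1] + e[n]

# Theoretical PSD from (3.26): Phi_y(Omega) = |H(e^{jOmega})|^2 * sigma_e2
Omega = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
Phi_y = sigma_e2 / np.abs(1 - 0.9 * np.exp(-1j * Omega)) ** 2

print(np.mean(y ** 2))            # sample power of y, ~5.26
print(Phi_y.mean())               # (1/2pi) * integral of Phi_y over [-pi, pi], also ~5.26
print(sigma_e2 / (1 - 0.9 ** 2))  # closed-form power 1/0.19 = 5.263...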

Moreover, if both y and x are zero-mean WSS processes, we have that:

• the value of y[n] at time n is not (cor)related in any way to the value of x[n − τ ] (τ
fixed) =⇒ Ryx [τ ] = 0

• the signals y[n] and x[n] are independent =⇒ Ryx [τ ] = 0 ∀τ

A very common stationary stochastic process is the white noise process e[n]. A white noise
is characterized by its variance σe² and is defined as a stochastic process for which the sample
at time-instant n is uncorrelated with the samples at other time-instants. At time-instant
n, e[n] is a zero-mean random variable with variance σe². The properties of a white noise
of variance σe² are thus:

Ee[n] = 0

Re[τ] = σe² for τ = 0, and Re[τ] = 0 elsewhere

Exercise 1. Consider the discrete-time stationary stochastic process y[n] that is generated
as:

y[n] = 10 + e[n]

with e[n] a (zero-mean) white noise of variance σe2 = 4. In Figure 1, two realizations of this
stochastic process are represented for 0 ≤ n ≤ 4.
Figure 1: Upper plot: first realization of y[n]. Bottom plot: second realization of y[n].

a. Why are the two realizations in Figure 1 different?

b. What is the mean of y[2]? What is the mean y[n] for the other values of n?

c. What is the variance of y[2]? What is the variance y[n] for the other values of n?

d. Does Figure 1 correspond to what is found in items [b.] and [c.]? Explain why.

e. Determine the auto-correlation function Ry [τ ] of y[n] for each value of τ .

f. Determine the power of y[n]

Exercise 2 (examination August 2006). Consider the discrete-time stationary stochastic
process y[n]. The process y[n] is generated as follows:

y[n] = e[n − 1] − a e[n − 2]


with a a given real scalar and e[n] a discrete-time (zero-mean) white noise with variance σe2 .

a. Determine the mean (i.e. the expected value) of y[n] for all values of n.
b. Determine the auto-correlation function Ry [τ ] of y[n] for all values of τ . Explain why
y[n] and y[n − 2] are uncorrelated i.e. why Ry [2] is equal to 0?
c. Determine the cross-correlation function Rye [τ ] between y[n] and e[n] for all values
of τ .
d. Determine the power of e[n] and y[n].
e. Determine the power spectral density function Φy (Ω) of y[n] and the power spectral
density function Φe (Ω) of e[n]
f. For stochastic processes, the theorem of Parseval says that the power of a stochastic
process y is equal to:
(1/2π) ∫_{−π}^{π} Φy(Ω) dΩ

Show that this theorem holds for both y and e using what has been found in items
[d.] and [e.].

[Block diagram: e[n] → H1 and v[n] → H2; the two filter outputs are summed to give y[n]]

Figure 2: Block schema for Exercise 3

Exercise 3 (Examination January 2007). Consider the discrete-time signal y[n] gener-
ated by the following difference equation:

y[n] = a y[n − 1] + b e[n] + c v[n] + d v[n − 1]


with e[n] and v[n] given discrete-time signals and a, b, c, d given real constants.

a. Determine the transfer functions H1 (z) and H2 (z) of the filters H1 and H2 in Figure 2
in such a way that this figure describes the difference equation generating y[n]. Hint:
use the time-shift property of the Z-transform.

b. What are the conditions on a, b, c and d so that the above difference equation rep-
resents a stable system? Hint: the system is stable if and only if both the transfer
functions H1 (z) and H2 (z) found in item [a.] are stable.

Consider the same equation but with a = 0.9, b = 1, c = d = 0 and e[n] a white noise
stochastic process with variance σe2 = 1. This delivers the following equation generating a
wide-sense stationary process y[n]

y[n] = 0.9 y[n − 1] + e[n]

c. Determine the power spectral density Φy (Ω) of y[n]

d. The stochastic process can be rewritten as

y[n] = β Σ_{k=0}^{+∞} α^k e[n − k]

for some α and β. Determine the value of α and the value of β.

e. Determine Rye [0] and Rye [−1] (for example based on the expression found in item [d.])

f. Determine the mean of y[n] (for example based on the expression found in item [d.])

g. The power spectral density spectrum Φy (Ω) found in item [c.] is represented in Figure 3
in its (main) interval [−π π] and is compared with Φe (Ω). In Figure 4, a realization of
y[n] and of e[n] are represented. We observe that y[n] has a smoother behaviour (i.e.
has a less erratic behaviour) than e[n]. Explain this behaviour based on the shape of
Φy (Ω).

h. What is the power of y[n] and e[n] when σe2 = 1? Can we deduce from Figure 4 that
the power of y is larger than the power of e?

Exercise 4 (examination February 2008). Consider the wide-sense stationary stochastic


process y generated as
y[n] = (3/4) x[n − 2] + (1/2) x[n − 4] + v[n]
where x and v are two white noise processes with variance σx2 and σv2 , respectively. Moreover,
we suppose that x and v are independent.

a. Determine the mean of y[n] for all values of n

Figure 3: Φy(Ω) (solid) and Φe(Ω) (dotted) for σe² = 1 (logarithmic vertical scale, Ω ∈ [−π, π])

Figure 4: Upper plot: a realization of y[n]. Bottom plot: a realization of e[n]. Both when
σe² = 1.

b. Determine Ry [τ ] for τ = 0

c. Determine the power of y

d. Determine Ryx [τ ] for all values of τ

e. Explain why the random variable y[n] (i.e. the value taken by y at time instant n) is
not correlated with the random variable x[n] (i.e. the value taken by x at time instant
n) or, in other words, why Ryx [0] = 0

f. Prove the following property of the power spectral density: For a stochastic process
y[n] = x1 [n] + x2 [n], we have that

Φy (Ω) = Φx1 (Ω) + Φx2 (Ω)

when x1 [n] and x2 [n] are two independent (wide-sense) stationary stochastic processes.

g. Determine the power spectral density function Φy (Ω) of the signal y given above. Use
for this purpose the property given in item [f.].

Solutions.
Exercise 1.

1.a. y[n] is made up of the summation of a constant 10 (which does not change in different
realizations) and of a white noise process (which is different at each realization). Conse-
quently, for each value of n, the value taken by y[n] = 10 + e[n] in different realizations is
different.

1.b. For n = 2, we obtain successively:

Ey[2] = E(10 + e[2]) = E10 + Ee[2] = 10 + 0 = 10
since e[2] is by definition a random variable with zero mean. The latter holds for each value
of n. Consequently, for an arbitrary value of n:

Ey[n] = E(10 + e[n]) = E10 + Ee[n] = 10 + 0 = 10
1.c. The variance var(y) of a random variable y is defined as E(y − Ey)2 . The sample y[2]
is a random variable and its mean is equal to 10 (see item [b.]). Consequently, the variance
var(y[2]) of y[2] is given by:

var(y[2]) = E(y[2] − 10)² = E(10 + e[2] − 10)² = E(e[2])² = σe² = 4
since e[2] is a random variable of variance σe2 = 4. The latter holds for each value of n.
Consequently, for an arbitrary value of n:

var(y[n]) = E(y[n] − 10)² = E(10 + e[n] − 10)² = E(e[n])² = σe² = 4
1.d. By looking at Figure 1, we see indeed that, for each value of n, y[n] takes values
around 10 and that the standard deviation with respect to this mean is $\sqrt{\sigma_e^2} = 2$.

1.e. For an arbitrary τ , we obtain:

Ry [τ ] = E (y[n] y[n − τ ])
= E ((10 + e[n]) (10 + e[n − τ ]))
= E (100 + 10 e[n] + 10 e[n − τ ] + e[n] e[n − τ ])
= E(100) + E(10 e[n]) + E(10 e[n − τ ]) + E(e[n] e[n − τ ])
= 100 + 0 + 0 + E(e[n] e[n − τ ])
= 100 + Re [τ ]

where the last step follows from the definition of the auto-correlation function. Recalling
now that e[n] is a white noise, we obtain:

$$R_y[\tau] = \begin{cases} 100 + \sigma_e^2 & \text{for } \tau = 0 \\ 100 & \text{elsewhere} \end{cases}$$

1.f. The power of a stochastic process is defined as Ry [0]. The power is thus equal to 100+σe2 .
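As a quick numerical check (an illustration, not part of the original exercise), the sketch below simulates y[n] = 10 + e[n] with a Gaussian white noise of variance σe2 = 4 (the Gaussian choice is an assumption; only the variance is specified in the exercise) and estimates the mean, the variance and Ry[τ]:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_e = 2.0                       # sigma_e^2 = 4, as in the exercise
n_samples = 100_000

# One long realization of y[n] = 10 + e[n]
e = rng.normal(0.0, sigma_e, n_samples)
y = 10.0 + e

print("estimated mean     :", y.mean())        # close to 10
print("estimated variance :", y.var())         # close to 4
print("estimated power    :", np.mean(y**2))   # Ry[0] = 100 + sigma_e^2 = 104

# Sample estimate of Ry[tau] = E(y[n] y[n - tau]) for a few lags
for tau in range(4):
    r = np.mean(y[tau:] * y[:n_samples - tau])
    print(f"Ry[{tau}] ~ {r:.2f}")               # 104 for tau = 0, about 100 otherwise
```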

Exercise 2.

2.a. The mean of y[n] can be computed as follows:

Ey[n] = E(e[n − 1] − ae[n − 2]) = Ee[n − 1] − aEe[n − 2] = 0 ∀n

since Ee[n] = 0 for all n for a white-noise e[n].

2.b. The auto-correlation function of y[n] can be computed as follows:

Ry [τ ] = Ey[n]y[n − τ ]
= E ((e[n − 1] − ae[n − 2])(e[n − 1 − τ ] − ae[n − 2 − τ ]))
= Ee[n − 1]e[n − 1 − τ ] − aEe[n − 1]e[n − 2 − τ ] − aEe[n − 2]e[n − 1 − τ ] + a2 Ee[n − 2]e[n − 2 − τ ]
= (1 + a2 )Re (τ ) − aRe (τ + 1) − aRe (τ − 1)

Now, by remembering that the auto-correlation function Re [τ ] = Ee[n]e[n − τ ] of a white noise e[n] is given by Re [0] = σe2
and Re [τ ] = 0 elsewhere, we obtain:

$$R_y[\tau] = \begin{cases} (1+a^2)\sigma_e^2 & \text{for } \tau = 0 \\ -a\sigma_e^2 & \text{for } \tau = \pm 1 \\ 0 & \text{elsewhere} \end{cases}$$

We see that Ry [τ = 2] = 0. To explain why this is logical, observe that Ry [τ = 2] = Ey[n]y[n − 2] with y[n] = e[n − 1] − ae[n − 2] and y[n − 2] = e[n − 3] − ae[n − 4]. We see that y[n] and y[n − 2] are generated by different elements of a white noise sequence (e[n − 1] and e[n − 2] for y[n], and e[n − 3] and e[n − 4] for y[n − 2]). These different elements are by definition uncorrelated. This explains why y[n] and y[n − 2] are uncorrelated and thus why Ry [2] = 0.

2.c. The cross-correlation function of y[n] with e[n] can be computed as follows:

Rye [τ ] = Ey[n]e[n − τ ]
= E ((e[n − 1] − ae[n − 2])e[n − τ ])
= Ee[n − 1]e[n − τ ] − aEe[n − 2]e[n − τ ]
= Re (τ − 1) − aRe (τ − 2)

We obtain thus:
 2
 σe f or τ = 1
2
Rye [τ ] = −aσe f or τ = 2
0 elsewhere

2.d. The power of a stochastic signal is equal to its auto-correlation function evaluated at
τ = 0. Thus, the power of e[n] is Re [0] = σe2 and the power of y[n] is Ry [0] = (1 + a2 )σe2 .

2.e. The power spectral density function Φy (Ω) of y[n] is defined as the DTFT of Ry [τ ] i.e.



$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_y[\tau]\, e^{-j\Omega\tau} = (-a\sigma_e^2)e^{j\Omega} + (1+a^2)\sigma_e^2 + (-a\sigma_e^2)e^{-j\Omega} = (1+a^2)\sigma_e^2 - 2a\sigma_e^2\cos(\Omega)$$

where the last equality follows from the fact that ejΩ + e−jΩ = 2cos(Ω). As expected, the
power spectral density function Φy (Ω) is periodic with period 2π.

The power spectral density function Φe (Ω) of e[n] is defined as the DTFT of Re [τ ] i.e.



$$\Phi_e(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_e[\tau]\, e^{-j\Omega\tau} = \sigma_e^2 \quad \forall\, \Omega$$

where we made use of the fact that $e^0 = 1$ and that

$$R_e[\tau] = \begin{cases} \sigma_e^2 & \text{for } \tau = 0 \\ 0 & \text{elsewhere} \end{cases}$$

2.f. Let us first perform the integral for Φy (Ω):

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \Phi_y(\Omega)\, d\Omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left( (1+a^2)\sigma_e^2 - 2a\sigma_e^2\cos(\Omega) \right) d\Omega = (1+a^2)\sigma_e^2 - \frac{2a\sigma_e^2}{2\pi}\underbrace{\int_{-\pi}^{\pi}\cos(\Omega)\, d\Omega}_{=0} = (1+a^2)\sigma_e^2$$

which is indeed what we found in item [d.] for the power of y. Now, let us proceed with the
integral of Φe (Ω):

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \Phi_e(\Omega)\, d\Omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sigma_e^2\, d\Omega = \sigma_e^2$$

which is indeed what we found in item [d.] for the power of e.
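The results of Exercise 2 can also be verified numerically. The sketch below is an illustration with assumed values a = 0.5 and σe2 = 1 (any values would do); it simulates y[n] = e[n − 1] − a e[n − 2], estimates Ry[τ] for a few lags and checks Parseval by averaging the theoretical Φy(Ω) over one period:

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma_e2 = 0.5, 1.0
N = 200_000

e = rng.normal(0.0, np.sqrt(sigma_e2), N)
y = e[1:-1] - a * e[:-2]             # y[n] = e[n-1] - a e[n-2] (valid samples only)

# Sample auto-correlation Ry[tau]; theory: (1+a^2)*sigma_e^2, -a*sigma_e^2, 0
for tau in range(3):
    r = np.mean(y[tau:] * y[:len(y) - tau])
    print(f"Ry[{tau}] ~ {r:+.3f}")

# Parseval: averaging Phi_y(Omega) over one period gives the power (1+a^2)*sigma_e^2
Omega = np.linspace(-np.pi, np.pi, 10_001)
Phi_y = (1 + a**2) * sigma_e2 - 2 * a * sigma_e2 * np.cos(Omega)
print("power from PSD :", np.trapz(Phi_y, Omega) / (2 * np.pi))
print("power expected :", (1 + a**2) * sigma_e2)
```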

Exercise 3

3.a. Using the shift in time property of the Z-transform, we obtain that

Y (z) = az −1 Y (z) + bE(z) + cV (z) + dz −1 V (z)

with Y (z), E(z) and V (z), the Z-transforms of y[n], e[n] and v[n], respectively. Since we are
looking for the transfer functions such that Y (z) = H1 (z)E(z) + H2 (z)V (z), we obtain:
$$Y(z) = \underbrace{\frac{b}{1 - az^{-1}}}_{=H_1(z)} E(z) + \underbrace{\frac{c + dz^{-1}}{1 - az^{-1}}}_{=H_2(z)} V(z)$$

3.b. The system is stable if and only if both H1 (z) and H2 (z) are stable transfer functions.
These transfer functions can be rewritten as follows:
$$H_1(z) = \frac{bz}{z-a} \qquad\qquad H_2(z) = \frac{cz + d}{z-a}$$
We see that the unique pole of both H1 (z) and H2 (z) is in z = a. Consequently, the only
condition for stability is that |a| < 1, i.e. −1 < a < 1. No extra conditions on b, c and d are needed
for stability.

3.c. Using formula (3.26) in the lecture notes, we have that:

Φy (Ω) = |H1 (Ω)|2 Φe (Ω) = |H1 (Ω)|2

since, for a white noise e, Φe (Ω) = σe2 and σe2 = 1 in this case. Now, using what has been
found in item [a.], we obtain:

$$\Phi_y(\Omega) = \left|\frac{1}{1 - 0.9\,e^{-j\Omega}}\right|^2 = \left|\frac{1}{1 - 0.9\left(\cos(\Omega) - j\sin(\Omega)\right)}\right|^2 = \left|\frac{1}{1 - 0.9\cos(\Omega) + j\,0.9\sin(\Omega)}\right|^2 = \frac{1}{\left(1 - 0.9\cos(\Omega)\right)^2 + \left(0.9\sin(\Omega)\right)^2}$$
3.d. Using the fact that y[n] is the result of the convolution of h[n] (i.e. the pulse response
of the filter H1 (see item [a.])) and e[n], we obtain that:
$$y[n] = \sum_{k=-\infty}^{+\infty} h[k]\, e[n-k]$$

Now, noticing that $h[n] = (0.9)^n u[n]$ (see Table 4.1 on page 177 of the book), we obtain:

$$y[n] = \sum_{k=0}^{+\infty} (0.9)^k\, e[n-k]$$

since u[n] = 0 for n < 0. Consequently, we have α = 0.9 and β = 1.
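This impulse response can also be generated numerically; the short sketch below (an illustration only) feeds a unit pulse through the difference equation of H1(z) = 1/(1 − 0.9z⁻¹) and compares the result with the closed form (0.9)ⁿ u[n]:

```python
import numpy as np

# Impulse response of H1(z) = 1/(1 - 0.9 z^-1): feed a unit pulse delta[n]
# through the recursion h[n] = 0.9 h[n-1] + delta[n]
n_max = 10
h = np.zeros(n_max)
h[0] = 1.0                           # h[0] = 0.9*h[-1] + delta[0] = 1
for n in range(1, n_max):
    h[n] = 0.9 * h[n - 1]            # delta[n] = 0 for n >= 1

print(h)                             # 1, 0.9, 0.81, ...
print(0.9 ** np.arange(n_max))       # matches h[n] = (0.9)^n u[n]
```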

3.e. We have:

$$R_{ye}[0] = E\, y[n]e[n] = E\left(\left(\sum_{k=0}^{+\infty}(0.9)^k e[n-k]\right) e[n]\right) = \sum_{k=0}^{+\infty}(0.9)^k\, E\left(e[n-k]e[n]\right)$$
$$= (0.9)^0 E(e[n]e[n]) + (0.9)^1 E(e[n-1]e[n]) + (0.9)^2 E(e[n-2]e[n]) + \ldots = \sigma_e^2 = 1$$

$$R_{ye}[-1] = E\, y[n]e[n+1] = E\left(\left(\sum_{k=0}^{+\infty}(0.9)^k e[n-k]\right) e[n+1]\right) = \sum_{k=0}^{+\infty}(0.9)^k\, E\left(e[n-k]e[n+1]\right)$$
$$= (0.9)^0 E(e[n]e[n+1]) + (0.9)^1 E(e[n-1]e[n+1]) + (0.9)^2 E(e[n-2]e[n+1]) + \ldots = 0$$

3.f. Using the relation found in item [d.], we have that:

$$E\, y[n] = E\left(\sum_{k=0}^{+\infty}(0.9)^k e[n-k]\right) = \sum_{k=0}^{+\infty}(0.9)^k\, E\left(e[n-k]\right) = 0$$

since Ee[n] = 0 for all n when e is a white noise.

3.g. That y[n] varies more smoothly (shows more memory) than e[n] is logical since the
value of y at time instant n is directly dependent on the value of y at time instant n − 1.
Indeed, y[n] = 0.9y[n−1]+ e[n]. The white noise e[n] has a more erratic behaviour since the
value of e[n] at time instant n is independent of the value of e[n] at previous and future time
instants. This has a direct consequence on the shape of their power spectra. Consequently,
that y[n] varies more smoothly than e[n] can be explained based on the shape of Φy (Ω) of
y[n] in Figure 3. Indeed, in this figure, we see that the power of y[n] is strongly concentrated
in the low frequencies (recall that Φy (Ω) is represented in a logarithmic scale) as opposed to
e[n] whose power is spread over the whole frequency range (Φe (Ω) is indeed nonzero for all
Ω) and we know that a signal whose power is located in the low frequencies range is a signal
which varies more smoothly.

3.h. The power of y is by definition equal to Ry [0] which is given by:

$$R_y[0] = E\, y[n]y[n] = E\left(0.9\, y[n-1] + e[n]\right)^2 = (0.9)^2 E(y[n-1]y[n-1]) + 1.8\, E(y[n-1]e[n]) + E(e[n]e[n])$$
$$= (0.9)^2 R_y[0] + 1.8\, R_{ye}[-1] + R_e[0] = (0.9)^2 R_y[0] + 0 + \sigma_e^2$$

where we made use of what has been found in item [e.]. It yields:

$$R_y[0] = (0.9)^2 R_y[0] + \sigma_e^2 \;\Longrightarrow\; \left(1-(0.9)^2\right) R_y[0] = \sigma_e^2 \;\Longrightarrow\; R_y[0] = \frac{\sigma_e^2}{1-(0.9)^2} \approx 5.26\, \sigma_e^2$$

When σe2 = 1, the power of y is thus equal to 5.26 while the power of e is by definition
equal to σe2 = 1. In Figure 4, we see that the power of y is larger since the values taken by
y[n] for different n are spread over a wider (larger) range of amplitudes.
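A short simulation (again an illustration, with an assumed Gaussian white noise of variance σe2 = 1) confirms these numbers: the estimated power of y is close to 5.26, and neighbouring samples of y are strongly correlated whereas those of e are not, which is the smooth behaviour discussed in item [g.]:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
e = rng.normal(0.0, 1.0, N)            # white noise with sigma_e^2 = 1

# Generate y[n] = 0.9 y[n-1] + e[n]
y = np.zeros(N)
for n in range(1, N):
    y[n] = 0.9 * y[n - 1] + e[n]
y = y[1000:]                           # drop the initial transient

print("power of e :", np.mean(e**2))   # about 1
print("power of y :", np.mean(y**2))   # about 1/(1 - 0.81) = 5.26

print("corr(y[n], y[n-1]) :", np.corrcoef(y[1:], y[:-1])[0, 1])  # about 0.9
print("corr(e[n], e[n-1]) :", np.corrcoef(e[1:], e[:-1])[0, 1])  # about 0
```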

Exercise 4.

4.a. We have for all values of n that


$$E\, y[n] = \tfrac{3}{4}\, E\, x[n-2] + \tfrac{1}{2}\, E\, x[n-4] + E\, v[n] = 0 + 0 + 0 = 0$$
since x and v are white noises and thus zero-mean.

4.b. We have that:

$$R_y[\tau = 0] = E\, y[n]y[n] = E\left( \tfrac{9}{16}\, x^2[n-2] + \tfrac{1}{4}\, x^2[n-4] + v^2[n] + \tfrac{3}{4}\, x[n-2]x[n-4] + \tfrac{3}{2}\, x[n-2]v[n] + x[n-4]v[n] \right)$$
$$= \tfrac{9}{16}\, R_x[0] + \tfrac{1}{4}\, R_x[0] + R_v[0] + 0 + 0 + 0$$

Using now the fact that x and v are both white noises, we obtain

$$R_y[0] = \tfrac{13}{16}\,\sigma_x^2 + \sigma_v^2$$
4.c. The power of y is by definition equal to Ry [0]. Thus, the power of y is $\tfrac{13}{16}\sigma_x^2 + \sigma_v^2$.

4.d. For an arbitrary value of τ , we have that:

$$R_{yx}[\tau] = E\, y[n]x[n-\tau] = E\left( \left(\tfrac{3}{4}\, x[n-2] + \tfrac{1}{2}\, x[n-4] + v[n]\right) x[n-\tau] \right)$$
$$= \tfrac{3}{4}\, E(x[n-2]x[n-\tau]) + \tfrac{1}{2}\, E(x[n-4]x[n-\tau]) + E(v[n]x[n-\tau]) = \tfrac{3}{4}\, R_x[\tau-2] + \tfrac{1}{2}\, R_x[\tau-4] + R_{vx}[\tau]$$
$$= \tfrac{3}{4}\, R_x[\tau-2] + \tfrac{1}{2}\, R_x[\tau-4] + 0 = \begin{cases} \tfrac{3}{4}\,\sigma_x^2 & \text{for } \tau = 2 \\ \tfrac{1}{2}\,\sigma_x^2 & \text{for } \tau = 4 \\ 0 & \text{elsewhere} \end{cases}$$

since x is a white noise.

4.e. We first observe that y[n] is a linear combination of the random variables x[n − 2],
x[n − 4] and v[n]. Second, we observe that, since x is white, the random variable x[n] is
not correlated with the previous samples of x: in particular with x[n − 2], x[n − 4]. The
random variable x[n] is also not correlated with v[n] since v and x are independent. Combining the previous observations implies that y[n] and x[n] are uncorrelated and thus that
Ryx [0] = Ey[n]x[n] = 0.

4.f. We must prove the following: for a stochastic process y[n] = x1 [n] + x2 [n], we have that

Φy (Ω) = Φx1 (Ω) + Φx2 (Ω)

when x1 [n] and x2 [n] are two independent (wide-sense) stationary processes. For this pur-
pose, let us recall that
$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_y[\tau]\, e^{-j\Omega\tau}$$

where Ry [τ ] = E(y[n]y[n − τ ]). Using the fact that y[n] = x1 [n] + x2 [n], we can rewrite
Ry [τ ] as follows:

Ry [τ ] = E (x1 [n]x1 [n − τ ] + x1 [n]x2 [n − τ ] + x2 [n]x1 [n − τ ] + x2 [n]x2 [n − τ ]) = Rx1 [τ ]+Rx2 [τ ]

where the last equality follows from the fact that x1 [n] and x2 [n] are independent (and zero-mean), so that the cross terms E(x1 [n]x2 [n − τ ]) = Ex1 [n] Ex2 [n − τ ] vanish. Consequently:

$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty} \left(R_{x_1}[\tau] + R_{x_2}[\tau]\right) e^{-j\Omega\tau} = \left(\sum_{\tau=-\infty}^{+\infty} R_{x_1}[\tau]\, e^{-j\Omega\tau}\right) + \left(\sum_{\tau=-\infty}^{+\infty} R_{x_2}[\tau]\, e^{-j\Omega\tau}\right) = \Phi_{x_1}(\Omega) + \Phi_{x_2}(\Omega)$$
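Since Φ is the DTFT of the auto-correlation function, the property can also be checked numerically by verifying that Ry[τ] ≈ Rx1[τ] + Rx2[τ] for two simulated independent processes. In the sketch below, x1 is an arbitrarily chosen coloured noise and x2 a white noise; these particular choices are assumptions made only for the illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200_000

w = rng.normal(0.0, 1.0, N)
x1 = w + 0.5 * np.concatenate(([0.0], w[:-1]))   # x1[n] = w[n] + 0.5 w[n-1] (coloured)
x2 = rng.normal(0.0, 1.0, N)                     # independent white noise
y = x1 + x2

def autocorr(s, tau):
    """Sample estimate of R_s[tau] = E(s[n] s[n-tau])."""
    return np.mean(s[tau:] * s[:len(s) - tau])

for tau in range(3):
    print(tau, autocorr(y, tau), autocorr(x1, tau) + autocorr(x2, tau))  # nearly equal
```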

4.g. We can decompose the signal y[n] in this exercise in the sum of two independent signals.
Indeed, y[n] = x1 [n] + x2 [n] with
$$x_1[n] = \tfrac{3}{4}\, x[n-2] + \tfrac{1}{2}\, x[n-4] \qquad\qquad x_2[n] = v[n]$$

These signals x1 [n] and x2 [n] are indeed independent since x[n] and v[n] are independent.

Based on the above decomposition, we can conclude that:

Φy (Ω) = Φx1 (Ω) + Φx2 (Ω)

Let us now compute Φx1 (Ω) and Φx2 (Ω). The latter can be easily determined:

Φx2 (Ω) = Φv (Ω) = σv2

since v is a white noise of variance σv2 (see e.g. exercise 2, item [e.]).

The signal x1 [n] is given by $x_1[n] = \tfrac{3}{4}\, x[n-2] + \tfrac{1}{2}\, x[n-4]$. The transfer function H(z)
between x and x1 is $H(z) = \tfrac{3}{4}\, z^{-2} + \tfrac{1}{2}\, z^{-4}$. Using now formula (3.26) of the lecture notes, we
obtain:

$$\Phi_{x_1}(\Omega) = |H(\Omega)|^2\, \underbrace{\Phi_x(\Omega)}_{=\sigma_x^2}$$

where H(Ω) = H(z = e^{jΩ}) is the frequency response of H(z) and |.| denotes the modulus.
We have thus:

$$\Phi_{x_1}(\Omega) = \left(\tfrac{13}{16} + \tfrac{3}{8}\left(e^{j2\Omega} + e^{-j2\Omega}\right)\right)\sigma_x^2 = \left(\tfrac{13}{16} + \tfrac{3}{4}\cos(2\Omega)\right)\sigma_x^2$$

Finally, we have thus:

$$\Phi_y(\Omega) = \Phi_{x_1}(\Omega) + \Phi_{x_2}(\Omega) = \left(\tfrac{13}{16} + \tfrac{3}{4}\cos(2\Omega)\right)\sigma_x^2 + \sigma_v^2$$
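As a sanity check (an illustration, not part of the examination question), the sketch below simulates the process with assumed variances σx2 = 1 and σv2 = 0.5 and compares the estimated Ry[0] and Ryx[τ] with the theoretical values found in items [b.] and [d.]:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400_000
sigma_x2, sigma_v2 = 1.0, 0.5          # assumed variances for the check
x = rng.normal(0.0, np.sqrt(sigma_x2), N)
v = rng.normal(0.0, np.sqrt(sigma_v2), N)

# y[n] = (3/4) x[n-2] + (1/2) x[n-4] + v[n], built for n >= 4
y = 0.75 * x[2:N - 2] + 0.5 * x[:N - 4] + v[4:]

print("Ry[0] estimated :", np.mean(y**2))
print("Ry[0] theory    :", 13 / 16 * sigma_x2 + sigma_v2)

# Ryx[tau] = E(y[n] x[n - tau]); theory: 0, 0, 0.75, 0, 0.5 (times sigma_x^2)
for tau in range(5):
    r = np.mean(y * x[4 - tau:N - tau])
    print(f"Ryx[{tau}] ~ {r:+.3f}")
```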

Course: WB3250 Signaalanalyse (2007-2008)
Exercise session 7: WSS STOCHASTIC PROCESSES (cont’d)

The values y[n] taken by a WSS stochastic process y at each time-instant n vary at each
experiment/realization. However, each realization of y presents the same global character-
istics i.e. each realization will for example oscillate around the same value and will have the
same frequency content....

The power distribution of a WSS stochastic process y over the frequency (i.e. its fre-
quency content) is represented by its power spectral density function Φy (Ω). For a white
noise y, the power spectral density function Φy (Ω) is equal to the variance of the white noise.

When y is obtained by filtering the WSS stochastic process x with a filter whose transfer
function is given by H(z), we have that:

Φy (Ω) = |H(z = ejΩ )|2 Φx (Ω) expression (3.26) in the lecture notes

For a stochastic process y[n] = x1 [n] + x2 [n], we have that

Φy (Ω) = Φx1 (Ω) + Φx2 (Ω)

when x1 [n] and x2 [n] are two independent WSS stochastic processes.

For WSS stochastic processes, the theorem of Parseval says that the power Py of a
stochastic process y (the power can also be computed as the auto-correlation function for τ = 0, i.e. Ry [0]) can be related to its power spectral density function:

$$P_y \stackrel{\Delta}{=} E\, y^2[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \Phi_y(\Omega)\, d\Omega = \frac{1}{\pi}\int_{0}^{\pi} \Phi_y(\Omega)\, d\Omega \qquad \text{(the last equality follows from } \Phi_y(\Omega) = \Phi_y(-\Omega)\text{)}$$
Exercise 1. In this exercise, we consider a continuous-time system in the following setup:

[Schema: x[n] → ZOH (period Ts) → x(t) → continuous system → y(t) → sampling (period Ts) → y[n]; the overall discrete-time system from x[n] to y[n] has transfer function G(z).]
This system represents a chemical plant producing a certain type of crystals. The output
y(t) is the discrepancy between the required size for these crystals and the actual size of the
crystals produced by the plant. Consequently, we wish y(t) to be as close as possible to zero.

In order to verify how close/far y(t) is from zero, we measure y(t); this is done via a sampling
mechanism yielding the discrete-time signal y[n] = y(t = nTs ) for some given Ts . The crystal
size (and thus y(t)) is influenced by the amount of reactants that are fed into the plant. This
amount is here given by Q + x(t) where Q is a given constant and x(t) a variable quantity
that can be considered as the input of the system (see the above schema). As also shown in
this schema, the continuous-time input x(t) is constructed via a ZOH based on a discrete-
time signal x[n]. Consequently, the considered system can also be seen as a discrete-time
system with input x[n] and output y[n]. This discrete-time system has a transfer function
G(z) which is supposed given. It is important to note that y[n] is not only influenced by
x[n] but by several process disturbances too. We have thus the following situation:

[Schema: x[n] → G(z) → + ← v[n], producing y[n]; i.e. y[n] is the sum of the filtered input G(z)x[n] and the disturbance v[n].]

In order to have an idea of the importance of those disturbances, we have run an experiment on the
plant with x[n] = 0 and measured the corresponding output y[n] = v[n]. The measurement
y[n] = v[n] is represented in Table 1 (left part). With a spectral analysis (see session 5), we
have also observed that v[n] can be modeled as a zero-mean WSS stochastic process whose
power spectral density function Φv (Ω) has the shape given in Table 1 (right part).

Table 1: Left: 10000 samples of a realization of the disturbance v[n]; Right: Φv(Ω) (plotted against Ω on a logarithmic frequency axis).

The variations y[n] = v[n] observed when keeping x[n] = 0 are much too large. We
therefore decide to reduce the influence of v[n] on the output y[n] with a feedback controller.
The closed-loop control setup is represented in Table 2 (top left). As can be seen in this
schema, the (digital) controller C(z) uses the measurement y[n] of the output to compute
the optimal x[n] for the compensation of v[n]. The closed-loop control setup is represented
in Table 2 (top left). The modulus of the frequency response of the three main closed-loop
transfer functions are also represented in this table. These transfer functions are very impor-
tant to understand the properties of the closed loop as we will show in this exercise. The first
closed-loop transfer function is the so-called sensitivity function S(z) = (1 + C(z)G(z))−1 .
The second and third transfer functions are C(z)S(z) and the so-called complementary sensitivity function C(z)G(z)S(z).

Table 2: Top Left: closed-loop setup (the reference 0 minus the measured output y[n] enters C(z), which generates x[n]; G(z) and the disturbance v[n] then produce y[n]); Top Right: |S(ejΩ)|; Bottom Left: |C(ejΩ)S(ejΩ)|; Bottom Right: |C(ejΩ)G(ejΩ)S(ejΩ)|.

Important remark. Due to the fact that the disturbance v is modeled by a (zero-mean)
WSS stochastic process, the output of the plant and the control input are also WSS stochas-
tic processes for which the power content can be determined via the power spectral density
function. In this stochastic framework, an attenuation of the disturbance will be character-
ized by an output y which has a smaller power than v.

The closed-loop control setup leads to the desired result since the output y[n] has a much
smaller power than v[n] as shown in Table 3 (left part) and this happens with a control
input x[n] whose power is judged very reasonable (see the right part of Table 3).

a. Determine the power spectral density functions Φy (Ω) and Φx (Ω) of the output y[n]
and of the control input x[n] as a function of Φv (Ω).

Table 3: Left: output y[n] in closed loop; Right: Control input x[n] in closed loop (10000 samples each).

b. Explain, based on Table 2 and on what has been found in item [a.], why the power Py
of y[n] and the power Px of x[n] are smaller than the power Pv of v[n].

Suppose now that the measurement of y(t) is subject to a quantization error (mea-
surement noise). Suppose thus that the output of the sensor does not deliver precisely
y[n] = y(t = nTs ), but y[n] + e[n] instead. Here, e[n] can be considered as a white noise of
variance σe2 = (0.005)². This white noise e is independent of the process disturbance v. The
closed-loop setup becomes such as in Table 4 and a realization of the measurement noise
e[n] is also represented in this table. Note that e[n] is a relatively important measurement
noise since e[n] has a maximal amplitude of the order of the signal y[n] in Table 3.

We have simulated this closed-loop setup and obtained the output y[n] and the control
input x[n] given in the bottom part of Table 4. The values taken at each instant by
these signals are different from the ones in Table 3 (the realization of v[n] being different).
However, as far as the global characteristics are concerned, the results in Tables 3 and 4 are
similar. Consequently, the influence of the measurement noise seems negligible.

c. Determine the power spectral density functions Φy (Ω) and Φx (Ω) of the output y[n]
and of the control input x[n] as a function of Φv (Ω) and Φe (Ω) = σe2 .

d. Explain, based on what has been found in item [c.] and on the hint below, why the
powers of y[n] and x[n] are similar in Tables 3 and 4. Hint:

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} |C(e^{j\Omega})S(e^{j\Omega})|^2\, d\Omega = 0.001$$

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} |C(e^{j\Omega})G(e^{j\Omega})S(e^{j\Omega})|^2\, d\Omega = 0.038$$

Table 4: Top Left: closed-loop setup with measurement noise (the measured output y[n] + e[n] is fed back to the controller, while the actual output is y[n]); Top Right: 10000 samples of a realization of the disturbance e[n]; Bottom Left: output y[n] with measurement noise; Bottom Right: Control input x[n] with measurement noise.

e. It can be very harmful for the actuators to have to vary at a very fast rate. Consequently,
besides a control input x[n] with small power, it is desired that this small x[n] does not
have a power distribution concentrated in high frequencies. In this example, we will
say that the high frequencies are the ones in the (normalized) frequency range [0.5 π].
Show that x[n] has indeed a small power contribution in this range.

Exercise 2 (radar range determination). Radars are devices which allow one to detect an
object and to determine at which distance from the radar this object is located. For this
purpose, the radar emits a white noise x with variance σx2 . If an object is in the neighborhood,
the signal x is bounced back and comes back to the radar (with some attenuation K < 1).
Of course, the bounced back signal is not the only signal received by the radar: the radar
receives a lot of other signals that we will model by a WSS stochastic signal e independent
of x. Let us denote by y, the signal received by the radar. This signal is thus equal to:

y[n] = K x[n − c] + e[n]

where c is an unknown integer. If we could determine this unknown integer c, we would be able to determine the distance between the radar and the detected object. Indeed, if we know the propagation speed v of x in the air, the distance is given by $\frac{c\, T_s\, v}{2}$ with Ts the sampling period. In the sequel, we will show how the delay c can be determined.

a. Determine the cross-correlation function Ryx [τ ] = Ey[n]x[n − τ ] between y and x for
all τ

b. Determine, based on what has been found in item [a.], a method to determine the
unknown delay c

Solutions.
Exercise 1.

1.a. In order to determine Φy (Ω) and Φx (Ω) as a function of Φv (Ω), we will use expression
(3.26) of the lecture notes. For this purpose, we first need to determine the transfer functions
relating y (resp. x) and v. In order to be able to use the Z-transform theory, we suppose
that v is a deterministic signal for which the Z-transform V (z) exists (see page 36 of the
lecture notes; this is just a trick so that everything is mathematically sound, it is not
important in practice since we just want to determine the transfer function between v and y).
Under this assumption, we can relate the Z-transforms Y (z), X(z) and V (z) of the signals
y[n], x[n] and v[n] as follows:
$$Y(z) = G(z)\,\overbrace{(-C(z)Y(z))}^{=X(z)} + V(z) \;\Longrightarrow\; Y(z) = \frac{1}{1 + C(z)G(z)}\, V(z) = S(z)V(z)$$
Equivalently, we deduce:

X(z) = −C(z)Y (z) = −C(z)S(z)V (z)

Now that the transfer functions have been determined, we can go back to our stochastic framework
and use expression (3.26) of the lecture notes to deduce that:

Φy (Ω) = |S(z = ejΩ )|2 Φv (Ω)

Φx (Ω) = |C(z = ejΩ )S(z = ejΩ )|2 Φv (Ω)

1.b. As can be seen in Table 1, the process v[n] concentrates its power in the frequency
range [0 0.01]; Φv (Ω) is indeed negligible outside this frequency range. As a consequence,
we have that:

$$P_v \approx \frac{1}{\pi}\int_{0}^{0.01} \Phi_v(\Omega)\, d\Omega$$

Now, recalling that Φy (Ω) = |S(z = e^{jΩ})|² Φv (Ω), we can thus write that:

$$P_y = \frac{1}{\pi}\int_{0}^{\pi} |S(e^{j\Omega})|^2\, \Phi_v(\Omega)\, d\Omega$$

Since Φv (Ω) is negligible outside the frequency range [0 0.01], the last expression can be
approximated almost perfectly by:

$$P_y \approx \frac{1}{\pi}\int_{0}^{0.01} |S(e^{j\Omega})|^2\, \Phi_v(\Omega)\, d\Omega$$

Note that 1 ≤ |S(e^{jΩ})| < 2 for all Ω ∈ [0.05 π], but Φv (Ω) is so negligible at those frequencies that this amplification does not change the previous conclusion.

Now, we observe that |S(e^{jΩ})| << 1 for all Ω ∈ [0 0.01]. Consequently, we conclude that:

$$P_y \approx \frac{1}{\pi}\int_{0}^{0.01} |S(e^{j\Omega})|^2\, \Phi_v(\Omega)\, d\Omega \;\ll\; \frac{1}{\pi}\int_{0}^{0.01} \Phi_v(\Omega)\, d\Omega \approx P_v$$

This is the desired behaviour since the output should be as small as possible.

The same reasoning can be done for x[n]. Indeed, |C(z = ejΩ )S(z = ejΩ )| < 1 for all Ω.
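The power comparison of this item can be mimicked numerically once the curves of Table 1 and Table 2 are available as arrays. The sketch below uses made-up stand-in shapes for Φv(Ω) and |S(e^{jΩ})| (these are assumptions, not the actual plant data) purely to illustrate how Pv and Py would be evaluated with a trapezoid rule:

```python
import numpy as np

# Illustrative stand-ins (NOT the actual plant data): Phi_v concentrated below
# Omega ~ 0.01 and a sensitivity |S| that is much smaller than 1 in that band.
Omega = np.linspace(0.0, np.pi, 20_000)
Phi_v = np.exp(-(Omega / 0.01) ** 2)             # low-frequency disturbance spectrum
S_mag = np.minimum(1.5, Omega / 0.05)            # |S| << 1 below 0.01, about 1.5 above

P_v = np.trapz(Phi_v, Omega) / np.pi             # Pv = (1/pi) * integral of Phi_v
P_y = np.trapz(S_mag**2 * Phi_v, Omega) / np.pi  # Py = (1/pi) * integral of |S|^2 Phi_v

print(f"Pv ~ {P_v:.2e}")
print(f"Py ~ {P_y:.2e}  (much smaller than Pv, as argued above)")
```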

1.c. To deduce the transfer functions, like in item [a.], we assume that v and e are deter-
ministic and have a Z-transform. Under this assumption, we can relate the Z-transforms
Y (z), X(z) , V (z) and E(z) of the signals y[n], x[n], v[n] and e[n] as follows:

Y (z) = G(z) (−C(z)Y (z) − C(z)E(z)) + V (z)

$$\Longrightarrow\; Y(z) = \frac{1}{1 + C(z)G(z)}\, V(z) - \frac{C(z)G(z)}{1 + C(z)G(z)}\, E(z) = \overbrace{S(z)V(z)}^{Y_v(z)} + \overbrace{(-C(z)G(z)S(z)E(z))}^{Y_e(z)}$$

Equivalently, we deduce:

$$X(z) = -C(z)\left(G(z)X(z) + V(z) + E(z)\right)$$

$$\Longrightarrow\; X(z) = \frac{-C(z)}{1 + C(z)G(z)}\, V(z) - \frac{C(z)}{1 + C(z)G(z)}\, E(z) = \overbrace{-C(z)S(z)V(z)}^{X_v(z)} + \overbrace{(-C(z)S(z)E(z))}^{X_e(z)}$$

Coming back to the stochastic framework, we see that the output y is made of two addi-
tive contributions yv (obtained by filtering v by S(z)) and ye (obtained by filtering e by
−C(z)G(z)S(z)). Similarly, x is made of two additive contributions xv and xe due to v and
e, respectively.

Since yv is independent of ye AND xv is independent of xe (v and e being independent),


we can conclude that (see the property given in item [f.] of exercise 4 in session 6):

Φy (Ω) = Φyv (Ω) + Φye (Ω)

Φx (Ω) = Φxv (Ω) + Φxe (Ω)

and thus, using expression (3.26) of the lecture notes:

Φy (Ω) = |S(ejΩ )|2 Φv (Ω) + |C(ejΩ )G(ejΩ )S(ejΩ )|2 σe2

Φx (Ω) = |C(ejΩ )S(ejΩ )|2 Φv (Ω) + |C(ejΩ )S(ejΩ )|2 σe2

since Φe (Ω) = σe2 .

It is important to note that yv (resp. xv ) is equivalent to y (resp. x) in item [a.] since,


in this item, e = 0.

1.d. Based on what has been deduced in item [c.], we can conclude that the power Py of
y[n] is given by :
$$P_y = P_{y_v} + \underbrace{\frac{1}{2\pi}\int_{-\pi}^{\pi} |C(e^{j\Omega})G(e^{j\Omega})S(e^{j\Omega})|^2\, \sigma_e^2\, d\Omega}_{=P_{y_e}}$$

Remember that Pyv is the power of the output when there is no measurement noise (i.e. like
in Table 3).

Using now the hint, we conclude that the power Py of y[n] is here thus given by:

Py = Pyv + 0.038 σe2


1
We see thus that the power Pye of ye is 0.038
= 26 times smaller than the power of e itself.
Consequently, we have that Py ≈ Pyv .

Remark. This is once again the desired property since we want y as small as possible and
thus we want the power of y to be as small as possible.

Using a similar reasoning, we can deduce that the power of x[n] is given by :

Px = Pxv + 0.001 σe2

Here the power Pxe of xe is 1000 times smaller than the power of e itself. Consequently, we
have that Px ≈ Pxv .

1.e. As deduced in item [c.]:

Φx (Ω) = |C(ejΩ )S(ejΩ )|2 Φv (Ω) + |C(ejΩ )S(ejΩ )|2 σe2

Let us analyze the contribution in [0.5 π] of both terms:

• the first term |C(ejΩ )S(ejΩ )|2 Φv (Ω) has an absolutely negligible contribution
in this range since both |C(ejΩ )S(ejΩ )| and Φv (Ω) are negligible in this frequency
range.

• the second term |C(ejΩ )S(ejΩ )|2 σe2 is generated by a white noise which has thus equal
power content at each frequency. However, this white noise is here filtered by C(z)S(z)
which is a transfer function attenuating the components at each frequency, but espe-
cially the components at higher frequencies (see Table 2). Consequently, the contribu-
tion in the range [0.5 π] of this second term will be very small.

Exercise 2.

2.a. We have that:

Ryx [τ ] = Ey[n]x[n − τ ]
= E ((Kx[n − c] + e[n]) x[n − τ ])
= E (Kx[n − c]x[n − τ ] + e[n]x[n − τ ])
= K Ex[n − c]x[n − τ ] (since e and x are independent)
$$= \begin{cases} K\sigma_x^2 & \text{for } \tau = c \\ 0 & \text{elsewhere} \end{cases}$$

since x is a white noise.

2.b. We know both x (the signal we have emitted) and y (the received signal). Conse-
quently, a method to determine c is to compute Ryx [τ ] for each τ ≥ 0 (c is indeed positive) and determine the
particular τ where this function is maximal (or is nonzero). This particular τ is then the
delay c.

Remark. In practice, we cannot compute exactly Ryx [τ ], but we can approximate this
function. Suppose that we have N samples of x and y. Then, a consistent estimate of
Ryx [τ ] for 0 ≤ τ ≤ N − 1 is
$$\hat{R}_{yx}[\tau] = \frac{1}{N}\sum_{n=\tau}^{N-1} y[n]\, x[n-\tau]$$

With this approximation, R̂yx [τ ] will not precisely be equal to zero for τ ≠ c. However,
R̂yx [c] will be larger than R̂yx [τ ] for other τ (provided N is sufficiently large).
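The method of items [a.] and [b.] can be sketched in a few lines of code; the values of N, c and K below are assumptions chosen only for the illustration, and the estimator is the (slightly renormalized) R̂yx[τ] given above:

```python
import numpy as np

rng = np.random.default_rng(4)
N, c_true, K = 20_000, 37, 0.4         # assumed values for the illustration
x = rng.normal(0.0, 1.0, N)            # emitted white noise, sigma_x^2 = 1
e = rng.normal(0.0, 1.0, N)            # other received signals, independent of x

y = K * np.roll(x, c_true) + e         # y[n] = K x[n - c] + e[n]
y[:c_true] = e[:c_true]                # discard the wrapped-around samples

# Estimate Ryx[tau] for tau = 0 .. tau_max and pick the lag where it is maximal
tau_max = 100
R_hat = np.array([np.mean(y[tau:] * x[:N - tau]) for tau in range(tau_max + 1)])

c_estimate = int(np.argmax(R_hat))
print("estimated delay c :", c_estimate)          # recovers c_true = 37
print("R_hat at that lag :", R_hat[c_estimate])   # close to K * sigma_x^2 = 0.4
```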


